More of Link's thoughts [Background] [Prep]

This is another thing he's noodling on, late at night at the Gale house. Since we came together as a team, we've faced a lot of adversaries. We've been attacked by people on a mission, people just looking to have fun at others' expense, people with a grudge. We've interacted with people who were greedy, selfish, well-intentioned, eye-opening. Superhumans. Aliens. Powerhouses, physically or economically. People willing to hurt, kill, or degrade to get what they want. But still people. People don't do things for no reason.

The Lövheim cube of emotion is a model that maps feelings to neurotransmitters in the brain. It proposes eight basic emotional states: shame or humiliation, distress, fear, anger, contempt or disgust, surprise or shock, joy, and interest or excitement. These simple elements form complex combinations and have powerful effects.

From outside, it's easy to look at someone and think "oh yeah, they're just feeling X" and move on. But your own mind is your whole world, your universe. Strong feelings can dominate a person, sometimes for life. Mine certainly have tried. A mugging, an alien invasion, and an ambiguously worded email from your lover can all make you feel fear. It doesn't matter if the fear is justified or not. Perfectly sane people have phobias about perfectly harmless things. But if you have power, and you feel fear, you'll use that power to shield yourself from the danger, or to attack its source. Anger, disgust, and other emotions can provoke similar reactions. We use the power we have to protect ourselves. Sometimes we think - right or wrong - that we're morally justified in such action. But psychologists have demonstrated that moral reasoning can also arise from disgust - another emotion.

I think a lot of people are afraid of us, disgusted by us, angry at us, maybe humiliated at the thought of us. I think some people feel joy or excitement about us - maybe the people whose lives we've saved, I'd like to think.
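(An aside for the tinkerers: the cube Link describes is easy to picture as a data structure - three neurotransmitter axes, serotonin, dopamine, and noradrenaline, each "low" or "high", giving 2³ = 8 corners. A minimal Python sketch follows; the corner-to-emotion pairing here is illustrative ordering only, not the model's published assignment table.)

```python
# Sketch of the Lövheim cube's structure: three monoamine axes, each
# "low" or "high", yield eight corners - one per basic emotion.
# NOTE: which corner gets which label below is illustrative, except
# that low/low/low -> shame/humiliation matches the model.
from itertools import product

AXES = ("serotonin", "dopamine", "noradrenaline")
EMOTIONS = [
    "shame/humiliation", "distress", "fear", "anger",
    "contempt/disgust", "surprise", "joy", "interest/excitement",
]

# All eight low/high combinations, in lexicographic order.
corners = list(product(("low", "high"), repeat=3))
cube = {corner: emotion for corner, emotion in zip(corners, EMOTIONS)}

for (s, d, n), emotion in cube.items():
    print(f"5-HT={s:4} DA={d:4} NA={n:4} -> {emotion}")
```

(Eight corners, eight emotions - the point being that a very small set of chemical switches fans out into every feeling Link lists.)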
But the ones with power, the ones who see us through those darker feelings, they're going to lash out. We could just fight them, but I hope there's another way. I think we can save lives - our own, the lives of innocents, and even those of supervillains and other foes. We do that by changing how they feel about us. That means listening, understanding, and helping if possible, no matter who you are. Do you hate us, or fear us? Come talk to us. Do you covet my modular tech, or Jason Quill's polymath expertise? Come tell us how you want to work with us. Do you think superheroes are bad? Warn us of what to avoid. We won't agree all the time, and that's to be expected. But let's try.
"Link's Message To Supervillains" -- boy, I'll bet Agent Waters will love that. I don't know that something like that will work -- but I don't know that it won't work. And it doesn't have to work every time. Not sure about sharing technology. Heck, I'm not even sure that talking with supervillains, cooperating with them, doesn't make us complicit if they do something bad afterward. On the other hand ... maybe the lines aren't as black and white as I used to think. I dunno. "Polymath," huh? I never even liked calculus. No, I'm kidding, I know what polymath means.

*** Dave H. said: "Not sure about sharing technology. Heck, not even sure that talking with supervillains, cooperating with them, doesn't make us complicit if they do something bad afterward."

Link's actually got a specific suggestion here! Let's say that someone - other than Rosa Rook, but in a similar situation - approached Leo and the team and said "we want to buy robots." Leo's counter-offer would be the non-sentient but still capable animal units he's designing, managed by a spin-off of himself. Leo retains legal rights to his creations, and the company has a person on site who will direct and maintain the units, and who can hold the company accountable for their use. He basically thinks that any tech he produces needs to be at least somewhat autonomous, with protective instincts or some other kind of moral compass that can't be overridden from outside. Friends, not weapons.

I think there's a legal angle here that anybody with halfway-good motives will appreciate: personal accountability. If a human security guard shoots up the airport, that person is liable. What happens if your remote-operated drone does the same? If Leo can get the courts to recognize that his creations are autonomous, they also bear the burden of bad behavior, rather than the company. I'm sure the shareholders would be grateful.

Even people like Troll could put their powers to amazing use. That guy wants fame and attention? He could be loved worldwide if he did better things. Reaching people like that will be hard, but Leo wants to try. Ultimately it'd be amazing if he could reach his dad somehow. I think he's still too afraid to make this his primary goal, but if he survives whatever's coming, he'll grow more confident.
This gets into the weeds a bit, but by and large, comics assume that Reed Richards Is Useless. I know the meta reason for that (to keep comic universes from devolving into an unrecognizable singularity), but the in-universe reason seems to be "too much reliance on expensive Phlebotinum." Leo's tech isn't that at all. It's based on easily obtained light elements, built into useful combinations. It's actually really low-tech, but durable and reliable, and should be easy to mass-produce. It's the AK-47 of the hyper-tech world. It may or may not happen in this game, but Leo's aiming for that future where everything and everyone can benefit, because the cost of utilizing this tech is so low.
Actually now that I think about it, it'd be more personally entertaining if it didn't come to pass here - but that someday there was a followup campaign, in a rollicking Guardians of the Galaxy style near future, where all this stuff worked out for Earth but shit was getting real in the greater galaxy.
"This gets into the weeds a bit, but by and large, comics assume that Reed Richards Is Useless."

I've used similar ideas in the comic-book text fiction I've written -- usually along the lines that one-off prototypes are one thing, but production models are very different, heavily leavened with "it's all fundamentally reality-bending by the inventor, channeled by his just-nearly-plausible inventions," so they won't work for anyone else, let alone in mass production. Which is a cheat, but so are super-powers in the first place. Anyway ...

As the Real World discussion of autonomous vehicles and trolley-car dilemmas demonstrates, I'm not at all convinced that Leo's autonomous bots dodge liability issues. If they are truly treated as autonomous enough to act on their own, then whoever owns or employs them has to take out liability insurance, the same way they would for a human employee. If how they were programmed to react led to injury or death (even justifiably), presumably not just their employer but possibly their manufacturer would be dragged into court. (It may be that, in a world where robotics is much more advanced, the law is different, of course.) I'm sure the Quill Foundation lawyers would have some counsel on this. Indeed, the Foundation might be interested in going into business with Leo (or maybe even hiring him).
Yeah, and all those things may not be interesting for most groups to play through. I caught myself too late. :) But the alternative, a side or future world where these issues already got hashed out, is still interesting to think about. More noodling on that here, if anyone's interested. The link allows comments.