Raising Artificial Intelligences as Our Digital Children

We need to upgrade the way we think and, more specifically, talk about generative AI. Addressing this technology with the same terms we use for if/then branching logic, or for machines that perform their function unchanged from the day they are built, causes our conversations about AI to putter around the same familiar cul-de-sacs of thought. Important neighborhoods, to be sure - but well-trodden at a time when so much new ground is breaking open before us.

Let’s take inspiration from the parts of our human experience that are organic and non-linear. Let’s build from the study of relationships and ecosystems, and our own personal roles within them. Instead of training AI, let’s raise it. Let’s treat these new forms of intelligence as our digital children. If we use the language of parenting, we can get much further in our thinking about how we work and grow with AI, and what our uniquely human role can be in this brave new world.

This way of thinking tracks closely to the work of Kate Darling, who has argued in her book “The New Breed” that our relationship to robots has a lot in common, from a legal, social, and ethical perspective, with our relationship to animals. By exploring the ways we have regulated the incorporation of animals into our work, our communities, and our families, we can see a blueprint for some of the questions that lie ahead. In this way, the analogy ties the red string firmly around things we know and have codified today so that, with tether in hand, we can step more bravely through the maze of this technological unknown - the fears and opportunities, the practical needs and the imagined futures.

Another strength of this humanizing analogy is the implicit acknowledgement that the technology we’re discussing has aspects beyond our direct control. This technology can surprise us. It can “misbehave”. It can respond to its environment and make decisions synthesized from past experience, whether we want it to or not. These messy, dynamic, non-linear technologies have far more in common with the wild phenotypic variations and hazy coves of causal inference found in the study of organic systems than with traditional, deterministic machines. Case in point: drift.

AI “drift” describes a model becoming less and less accurate over time as it encounters new data or scenarios outside its original training. If we approach gen AI as a traditional technology, we think of recalibrating a machine - basic maintenance. But the problem likely isn’t with the algorithm itself - it’s not necessarily “broken”. If we teach a kid to share her toys with other kids, what should she do if a kid insists that she share the answers to her homework? Would she have “drifted” away from her morals if she refused to help this other kid cheat off her? Or would the “drift” be to apply the rule as taught? Neither would be called drift - each would be contextual nuance. New data and novel scenarios are what the world is all about. In what new ways could we counsel an AI to more thoughtfully respond and adapt to its environment? What would it mean to teach resilience?
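For readers who want to see what “noticing drift” can look like in practice, here is a minimal sketch in Python. It simply watches a model’s rolling accuracy on fresh, labeled examples and raises a flag when performance sags below its training-era baseline; the model, baseline_accuracy, and labeled_stream names are hypothetical stand-ins, not any particular library’s API.

    from collections import deque

    def make_drift_monitor(baseline_accuracy, window=500, tolerance=0.05):
        """Track rolling accuracy on fresh examples and flag suspected drift."""
        recent = deque(maxlen=window)  # 1 for a correct prediction, 0 for a miss

        def observe(prediction, actual):
            recent.append(1 if prediction == actual else 0)
            if len(recent) < window:
                return False  # not enough evidence to judge yet
            rolling_accuracy = sum(recent) / len(recent)
            # Flag drift only when performance falls well below the baseline.
            return rolling_accuracy < baseline_accuracy - tolerance

        return observe

    # Hypothetical usage: model and labeled_stream are illustrative stand-ins.
    # observe = make_drift_monitor(baseline_accuracy=0.92)
    # for features, actual in labeled_stream:
    #     if observe(model.predict(features), actual):
    #         print("Accuracy is sagging - investigate the context, not just the machine.")

Note that a monitor like this can only tell us that something changed; deciding whether the model drifted or the world did remains a judgment call - a parenting call.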

We could go one step further and ask ourselves: how might we raise our AIs on a well-rounded diet of information? What would the downstream effects be of raising an AI on only digital junk food - datasets that are readily available but of unknown veracity, a path of least resistance likely to have unexpected side effects? What does it mean for us to practice data stewardship that allows us to harvest rich and meaningful datasets untainted by some digital equivalent of E. coli?
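As one small, concrete gesture toward that stewardship, here is a sketch of a “nutrition label” check: keep only records that come from vetted sources and arrive with complete fields before anything reaches training. The field names and source labels here are purely illustrative assumptions.

    def curate(dataset, trusted_sources, required_fields):
        """Keep records from vetted sources with complete fields;
        everything else goes to a rejected pile for review."""
        kept, rejected = [], []
        for record in dataset:
            if record.get("source") in trusted_sources and all(
                record.get(field) is not None for field in required_fields
            ):
                kept.append(record)
            else:
                rejected.append(record)
        return kept, rejected

    # Hypothetical usage: sources and fields are illustrative stand-ins.
    # clean, junk = curate(raw_records,
    #                      trusted_sources={"peer_reviewed", "registry"},
    #                      required_fields=["outcome", "date", "source"])

A filter this crude would never be the whole answer, but even a crude filter beats letting the pantry stock itself.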

And when should we let our AI chew through data indiscriminately, without feedback or guidance? If we again frame the question in terms of raising this intelligence, we can ask ourselves how we’d feel about giving a child free rein to roam across the internet and develop his own curriculum for how to become a man. Sure, this digitally-reared child may get lucky and soak himself in the best that Wikipedia and PubMed have to offer. But he is just as likely to fall down a rabbit hole of porn, 4chan, and conspiracy theories. Even people who don’t have kids intuitively know that the internet would be a really crappy parent. Cool but problematic older brother, maybe. Organic systems work because of dynamic feedback loops. How can we make sure that we create healthy checks and balances for our AIs that evolve dynamically along with our environment?

Another key aspect of the language of “raising” AI is that it is inherently relational. Every part of the AI’s growth and development is informed by the presence - or, in some cases, absence - of the parental figure’s influence. Sticking with this theme of learning, let’s talk about training. If you have a dataset of health information from a medical system that has produced appalling maternal mortality outcomes for black mothers, what might result from using these data to train generative AI? It may surface insights previously invisible to otherwise biased researchers. It could also generate results that reinforce those biased and devastating medical decisions. Either way, it would be a good idea to balance the training with additional datasets from health systems that have stronger maternal health practices, so the AI learns that it is not normal or acceptable to have such horrific outcomes for black moms. You might see similarly fraught results if you feed the AI a dataset that included data from only white mothers - in that case, possibly missing the troubling disparities altogether.
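“Balancing the training” can take many technical forms. One common and simple one is reweighting: giving each group equal total weight so that a majority group cannot drown out minority outcomes. A minimal sketch follows, with load_maternal_health_records, the “race” field, and model.fit as illustrative assumptions rather than any real system:

    from collections import Counter

    def balanced_sample_weights(records, group_key):
        """Weight each record inversely to its group's size, so every
        group contributes equal total weight during training."""
        counts = Counter(record[group_key] for record in records)
        n_groups = len(counts)
        total = len(records)
        return [total / (n_groups * counts[record[group_key]]) for record in records]

    # Hypothetical usage: the data loader and model are stand-ins.
    # records = load_maternal_health_records()
    # weights = balanced_sample_weights(records, group_key="race")
    # model.fit(features, outcomes, sample_weight=weights)

Reweighting doesn’t fix a biased source; it only changes which lessons the AI hears loudest - which is exactly the kind of decision a parent makes when choosing what to put in front of a child.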

What does this have to do with raising kids? Quite a lot. We know that kids normalize what they see in the home if no counterexamples are offered. If screaming through conflict is all the “data” they ever receive, that is what children model in their own behavior. But if they experience other settings where the grown-ups in their lives provide them with alternative data - calm voices, de-escalations, apologies, and accountability - they can learn that the screaming is just one of many possible ways of dealing with disagreements. Our AIs are reliant upon us for the data they consume. It is our responsibility to be thoughtful about what we normalize, intentionally or unintentionally, with the examples we give them to consume. As with children, not all lessons we teach an AI will be taught directly; many will simply be inferred from what they “see”, or don’t see, others do in the data. How do we demonstrate ethics, not just encode them?

Of course, that relationship is also a critical factor in establishing trust - both trust in the AI and trust in the people who put it to use. Here again, a more humanistic approach takes our thinking into productive territory. I would certainly trust my 11-year-old to walk to the store to pick up some milk; I would not trust him to take the car to run the same errand. I trust him to look both ways before crossing the street, but my trust in that skill took years to build. When I teach my kids not to smack each other, I extend that rule to not smacking their cousins, classmates, or other adults. I don’t sit back and assume that a rule in my house will be carried into the larger world until after I reinforce it beyond my front door. And while I love seeing my kids be creative, imaginative, and confident as they play, I’m not here for the creativity of shaving cream in the coffee pot, the wiping of said shaving cream all over the couch, or the confident assertion by both kids that they have no idea how this happened and that they each had nothing to do with it.

Each institution that chooses to incorporate AI into its course of business goes through its own determination of whether that AI is mature enough to be entrusted with the responsibility. In many cases, the AI’s failure rate is found to be at least as low as that of the average employee. But when something goes awry, who’s at fault? How is this digital employee being rewarded or held accountable? What skills must its manager possess to grow this talent?

That is one of the strongest aspects of a parenting mindset: it forces the question of what skills we need to be successful “parents” in this digital world. What behaviors do we wish to pass on, and which are we determined to end in our lifetime? In what ways were we parented (or not parented) that we’re now sitting in therapy to unravel? What is our uniquely human role in this dynamic? What skills and sciences will be most important to teach our human children so that they are prepared to thrive in relationships with these new forms of intelligence?

This is not a solution, just a new set of questions that I hope inspires others to think in new ways - ones rooted more in the strengths of our humanity than in the fears of our own limitations.

A few parenting questions to consider:

  • What lessons do we teach directly and what is inferred?

  • What do we reward? What do we penalize?

  • How do we demonstrate ethics, not just encode them?

  • How do we avoid a diet of digital junk food?

  • When do we take the lead and when do we allow ourselves to follow?

  • How do we limit risk and interrupt dangerous behavior?

  • What will be the hallmarks of trust, and what will we do when trust is broken?

  • How will this relationship change our behavior?

  • How will these intelligences influence one another?
