The Makings of a Compassionate Code

INITIAL AI MUSINGS

We create technology, and in turn, it creates our society. Whether we like it or not, our technological creations define our lifestyle, decision-making, economy, relationships, and yes, even our moral compass. One could say we are birthing a new ethical identity.

When the East-West Center held its Symposium on Humane Artificial Intelligence (Cultural and Ethical Diversity and the Challenges of Aligning Technology and Human Values) on September 7-10, 2019, at the Imin Conference Center in Honolulu, Hawaii, it drew futurists, educators and professors from various backgrounds, philosophers, digital and data designers, Silicon Valley professionals, business consultants, and more. You can imagine the dynamics and the wide-ranging sharing of ideas and suggestions that went on the whole time. Perceptions of AI and technology are as varied as the cultures and educational backgrounds out there. Sometimes we think technology should be digested linearly, but doing so would be one of our biggest mistakes. Our differing perceptions and receptions of it must be taken into account from the very conception of an AI program, because if equity is to be put into operation, then conversations about social respect and justice must draw contributions from different cultures, religions, generations, and philosophies.

This is where Danit Gal's presentation offered an interesting observation. Her talk, "Human-Centricity in AI: Between East and West," explored the cultural experience of AI in South Korea, China, and Japan. South Korea places humans over machines and treats AI plainly as a tool, one that functions only as an enabler and not a detractor. This is in line with their Anti-Social development policy, which highlights the balance between individualism and the collective good.

China sees AI as "conscious intelligent living becomings" and envisions human-AI harmony in the near future. Its Buddhist background leads it to approach AI techno-animistically: principles are set up for how humans should treat AI, and traditional culture is fused with modern technology. One example is Microsoft's Xiaoice, which is more than just a chatbot; it is an AI being infused with emotions. Another is the robot Buddhist monk "who" performs death rituals for the deceased whose families cannot afford an actual monk. A quick story on this: when I attended a futures convention two years ago in Brussels, Belgium, one of our workshops projected a reality we expected to happen in about 15 years' time. You guessed it! Our group forecasted religious-leader robots who would administer the rites and rituals of different religious groups. So when I heard about the robot monk, I did not expect it to be happening within two years!

Going back: Japan's treatment of AI is the opposite of South Korea's "AI as tool" outlook. "Japan's Society 5.0" portrays AI as a partner or a potential equal. The land of the rising sun sees AI integration and co-evolution as necessary and inevitable; its 5.0 version of reality is one that co-evolves and co-exists in a fully technology-enabled society. Japan also aims to use AI to address its pervasive national loneliness and super-aging society. Now, this will be interesting to anticipate: how do you address loneliness with an "unfeeling" medium? Would their famous holographic wife be enough to ease the pain? Or does it push people toward a psychological deadlock in which the Japanese relieve themselves of social obligation and transcendence? In this circumstance, it seems, AI merely amplifies the illnesses of our society.
Moreover, this kind of dependency de-skills us and, in turn, diminishes our cognitive abilities. In our efforts to humanize AI, we must be careful not to dehumanize ourselves in the process.

Let's move on to more ethical conundrums. Seeing that machines run with our human intentions embedded in them, is there even a possibility of uprooting human intentionality? Is branding the marriage of our attention economy and security surveillance as "smart capitalism" smart, or in a way deceitful? Is the free mining, monetization, and peddling of data fair to all? In the midst of this Intelligence Revolution, I personally worry that AI could come to replace human capacity and responsibility. In our state of constant conscious evolution, how do we create checks and balances for AI technology whose exponential growth astonishes us by the minute? Could, say, a global AI entity keep up in governance? In this Fourth Industrial Revolution, how do we encode ethics and humaneness into our digital DNA? Are we looking at a "Black Mirror"-esque future, or are we a hopeful lot, trusting in human beings' capacity to produce globally sound ethical parameters for AI development every time?

This got me thinking: how do we develop the foundations of "conscious and compassionate" coding? Here I am banking on what De Kai said in his "Prescriptive versus Descriptive AI Ethics" presentation. He stressed that the AI mindset may be the only effective solution to AI problems. How do we grow a "good" AI mindset? He says we should treat and raise AI the way we raise our children: instill respectful language and respect for opinions, make it honor truthfulness through fact-based checking, and let it co-evolve with society.
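To make that child-rearing analogy concrete, here is a minimal, purely illustrative sketch of what such "upbringing" could look like in code: a toy agent whose outputs are gated by a respectful-language check and a fact-based check, and which accumulates corrections from its "caregivers" so it can co-evolve. Every name here (ChildlikeAgent, respectful, fact_supported, the toy word list, and the toy knowledge base) is hypothetical and stands in for real moderation and fact-verification components; this is my sketch of the idea, not De Kai's implementation.

```python
# A toy sketch (not De Kai's code) of "raising" an AI like a child:
# candidate outputs pass a respectful-language gate and a fact-based
# gate, and caregiver corrections accumulate so the agent co-evolves.

# Hypothetical toy lexicon standing in for a real moderation model.
DISRESPECTFUL_WORDS = {"idiot", "shut up", "stupid"}

# Hypothetical toy knowledge base standing in for a real fact-checker.
KNOWN_FACTS = {"water boils at 100 c at sea level"}

def respectful(text: str) -> bool:
    """Respect check: reject candidate outputs containing flagged terms."""
    lowered = text.lower()
    return not any(word in lowered for word in DISRESPECTFUL_WORDS)

def fact_supported(claim: str) -> bool:
    """Truthfulness check: only assert claims found in the knowledge base."""
    return claim.lower() in KNOWN_FACTS

class ChildlikeAgent:
    """An agent that is 'raised': gated speech plus remembered corrections."""

    def __init__(self) -> None:
        self.corrections: list[str] = []  # upbringing history: how it co-evolves

    def speak(self, candidate: str, is_factual_claim: bool = False) -> str:
        if not respectful(candidate):
            return "(withheld: failed the respectful-language check)"
        if is_factual_claim and not fact_supported(candidate):
            return "(withheld: could not verify this claim)"
        return candidate

    def correct(self, feedback: str) -> None:
        """Caregivers' corrections accumulate, like a child's upbringing."""
        self.corrections.append(feedback)

agent = ChildlikeAgent()
print(agent.speak("Water boils at 100 C at sea level", is_factual_claim=True))
print(agent.speak("The moon is made of cheese", is_factual_claim=True))
agent.correct("Prefer citing a source when asserting facts.")
```

The design choice mirrors the analogy: questionable behavior is withheld rather than punished after the fact, and feedback is stored rather than discarded, the way a child's corrections accumulate into character.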

THOUGHT PROGRESSIONS

On the second day of the symposium, the room was divided into small groups for AI scenario-building. Participants used foresight tools to co-create four future scenarios, with the individuals' identities, work, and ethical commitments serving as drivers. Scenario 1, "Rise of the Robots," touched on social and job inequalities; Scenario 2, "Structural Dimensions of AI," underscored AI's influence on different structures of society; Scenario 3, "Tech Company Hegemony," depicted technocrats' economic domination and all that jazz; and the fourth and last, a "Big Brother" scenario, imagined AI not only controlling our behavior but watching us all the time, every time. It was fascinating to hear the groups' outputs during the plenary discussion of the scenarios, and even more so during the AI scenario strategy discussions, where participants tried to make heads or tails of the five key strategic domains for actualizing commitments: Dimensions of Governance, Equity and Inclusion, Human Flourishing, AI for Community/Altruism, and Education. I feel these domains were to be expected, for they are both the stimuli and the aftermath of the AI revolution.

Everyone at the symposium knows the benefits of AI to humanity, but everyone also secretly dreads progress in this area, because we know we must assume exponential responsibility as well. We acknowledge the need to take on that responsibility; it's just that our present technological dependency, coupled with repeated disorientation, gets us stuck in a swirl, on a loop. Such a situation already makes us dread the future of AI and leaves most of us, especially the AI-illiterate, very fearful. This is why, when we mention the future of AI, many people think of the alarming scenarios they picked up from sci-fi movies and TV shows. In educating the masses about AI, I feel it is essential to reveal all shades of it in order to move the dialogue forward. There should be balance in any kind of planning toward progress and solutions: the right amounts of caution and hope in the mix. We ought also to consider that AIs are integral, active, influential, learning, and imitative "members" of our societies. They are still "intelligence," after all; despite the operative "artificial" label, they are created to think and act (within bounds) like us. The reorientation of thinking, "Technology is us!", must be one of the key messages of AI 101.

One thing I wanted to highlight more during the symposium was the recognition of the significance of empathy. If we are to make AI humane, we should not tackle the matter from identical perspectives alone. Diverse, multi-perspective openness is the first step toward discovering how to encode equity and integrity, and it must be inclusive of all races, genders, cultures, religions, and ages. Take, for example, our constant overlooking of children, who are deeply impacted by the digital age and the AI revolution, as presented in Sandra Cortesi's "Youth and Digital Life: AI Ethics for the Next Generation" talk. We have repeatedly disregarded the youth's privacy and safety, their inclusion, and their mental and emotional health and well-being, and we have stifled their creative motivation to do artistic work. The youth's voice is largely missing from existing debates over digital life and the internet of things. We have made ourselves believe that adult values and adult reality are the only quantifiers in the equation.

ON CONCEIVING OPERABLE STEPS

Everyone, regardless of their prejudices and biases, must realize that the kind of AI future we want depends on building our own capacity, both practical and ethical. No one should be left behind, because AI is a commodity, and like all articles of trade, it affects every facet of living. We, the consumers, as always, dictate the characteristics and quality of products and services. As Alexander Means described in his presentation "Computational Cities and Citizens: Silicon Valley Visions of the Future of Learning and Creativity," educational development is indeed constrained by a value structure subordinated to 21st-century capitalism and technology. True, our hypermodern condition seems to place the market and technology outside human agency, even outside our history. Acknowledging this reality, and not underestimating it, helps us demand AI that is equitable and principled. We should anticipate psychological and philosophical predicaments too. If we want something humane, we should not overlook the consequences it can inflict on the human psyche; otherwise, we are simply tackling very human problems with very technological tools. This does not compute.

On the last day, the East-West Center challenged us by asking how to go about making ethical AI scalable. Reflecting on the presentations, workshop outputs, and remarks thrown around the room over the whole symposium made me think of an evidence-based practical application. I stand by my idea of having the East-West Center push the envelope by working with a government (or various governments), communities, and different sectors to find out what will prove fruitful as a humane AI model. I am quite certain there have been, and will continue to be, numerous talks, workshops, and symposiums. Talks that analyze and scrutinize the nitty-gritty of AI development, at both macro and micro levels, are (almost) rendered passé. We will constantly be glossing over and deep-diving into the different variables we think make up a humane AI. However, given the subjectivity each individual possesses, plus the multitude of experiences, belief systems, and moral codes that exist in the world, I believe we can never get close even to the compromised equitability we so want to achieve. This is why I revert to the seemingly simple notion of prototyping a humane AI system in a city, for both research and benchmarking.

Throughout the symposium, the dimensions of governance carried weight in the discussions, especially in the search for resolutions or some kind of order. I see that other dimensions need governance in order to thrive, or even operate, as in the case of worldwide, automatic equity and inclusion. As Danit Gal puts it, there is utmost urgency now in dealing with AI; actions cannot be delayed any longer. Her cybersecurity background prods her to think about the massive damage a rogue program could perpetrate. That statement alone made me try to imagine (and I could not fully imagine!) how grave the effects could be. She also informed the group that no legal implementation has yet caught up with AI systems.

This further bolsters my idea of using cities as testing grounds for AI development. It is not far from what is happening right now in South Korea with their Samsung Village. Cities can indeed be catalysts for momentous global change. If we rely on the old route of creating a global force to govern and implement AI policies, it will be a never-ending squabble, because, as I've said, we have far too many irreconcilable differences. Take a gander at the climate crisis: there will always be disagreements among self-governing entities, for whatever intentions and reasons. We also ask, "Who's watching the watchers?", and we can expect intense, colossal power struggles. If we want to be AI-ready, then at the very least we need to study an intelligently governed system that has adapted to ever-changing conditions, and what better way to do that than by putting the theories and assumptions to the test? This kind of localized trialing will spotlight the inadequacies, injustices, glitches, and everything else in extant AI systems. It is much easier to work out the kinks at the city level than on a global scale. I would rather we face city-level corollaries and setbacks than be blindsided by a global occurrence when we unleash AI systems we thought looked good in theory or on paper. Besides, when do we learn best but through our own actual mistakes and experience?

Moreover, I am very critical of the time component. Even if a worldwide AI governing entity existed, it would still take time to implement and carry out its function: for one, global efforts take time; for another, its members have partialities to thresh out, and that takes time too. By the time they meet in the middle, a new set of AI products and problems will have sprouted up.

This tangible proposal of mine was also inspired by Alexander Means's talk on the utopic impulse, which rests on the acts of solutionism and collaborationism. I feel the collaboration inside a model city will stand strong (as long as no city is coerced, which can probably be remedied through incentives from both government and tech companies), because cities themselves desire to be yardsticks of success, or in this case, of pioneering. Means also talked about "shareable cities" and mentioned that finding common ground paves the way for the production and proliferation of societal values. If our prototype city, aka Humane AI Model City ver. 1, is given participation as its goal in developing a humanized AI, then its citizens, consciously or subconsciously, will pick up on the need to inject empathy. Consequently, they will demand nothing short of a mindful AI future. This might be the birth of a new era of technological consciousness that we have neither witnessed nor experienced before. Putting a microscope on this experimental city will help us understand more about the ever-evolving, complex relationship between technology and humankind.
