The Battle for Ethical Intelligence⚖️🤖

AI's future splits: colossal models from tech giants vs. sustainable pathways with large language models. Societal challenges abound, from wealth shifts to research migration. Amidst this, several issues guide our journey through technological advancement.


Artificial intelligence has emerged as one of the focal points of contemporary discourse, captivating the attention of governments and private institutions alike. Often heralded as a magic bullet, AI has tremendous potential to transform various sectors, in ways ranging from the benign to the more dubious, and to reshape the very foundations of modern society.
So, what is the future trajectory of AI? The imminent evolution seems to point towards Artificial General Intelligence. However, its development could splinter into two routes. On one hand, big tech firms will steer the creation of larger, more resource-hungry, all-encompassing models. Conversely, a more sustainable direction would be a large language model (LLM) that orchestrates various ML and non-ML APIs, enabling the distinct capabilities of each to shine. This kind of approach would make AI less scale- and capital-dependent, democratising it so that both small and large players can create value in different parts of the AI ecosystem.
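The orchestration idea can be sketched in a few lines. This is a minimal illustration, not any vendor's API: the tool names, the JSON shape of the tool call, and the functions themselves are all hypothetical stand-ins; in a real system the JSON would be emitted by the LLM (e.g. via function calling), and the registry could wrap any ML or non-ML service.

```python
import json

# Hypothetical tool registry: each name the model may emit maps to a
# plain Python callable (a stand-in for any ML or non-ML API).
def convert_currency(amount: float, rate: float) -> float:
    return round(amount * rate, 2)

def word_count(text: str) -> int:
    return len(text.split())

TOOLS = {"convert_currency": convert_currency, "word_count": word_count}

def dispatch(model_output: str):
    """Route a model-emitted tool call (JSON) to the matching API."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# In a real system this JSON string would come from the LLM itself.
print(dispatch('{"name": "convert_currency", "arguments": {"amount": 100, "rate": 0.92}}'))
# 92.0
```

The LLM only plans and routes; each specialised tool does the actual work, which is what keeps this architecture light compared with one monolithic model.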
Considering this trajectory, it is imperative to weigh the societal ramifications. The AI100 report hints at a redistribution of wealth in favour of those in possession of AI-based tools. The migration of researchers from academia to industry could aggravate this situation by biasing research objectives. A combination of these could severely alter current socio-economic structures, exacerbate information disparities, marginalise certain groups and amplify misinformation. Those who hold data would be the new bourgeoisie of this era.
Another research frontier that the proliferation of AI will inevitably necessitate is ‘Explainable AI’. While contemporary deep learning models outperform traditional models, their underlying decision-making remains enigmatic to stakeholders. A framework for elucidating these models would help stakeholders discern the influence of various data points on outcomes.
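One simple, model-agnostic way to get at "which inputs influence the outcome" is permutation importance: shuffle one feature and see how much the predictions move. The sketch below uses a fixed linear function as a stand-in for a trained black box; the coefficients and data are purely illustrative.

```python
import random

random.seed(0)

# Stand-in "black box": in practice this would be a trained deep net.
def model(x):
    return 3.0 * x[0] + 0.5 * x[1] + 0.0 * x[2]

data = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(500)]
baseline = [model(x) for x in data]

def importance(feature):
    """Mean absolute prediction change when one feature is shuffled."""
    shuffled = [row[feature] for row in data]
    random.shuffle(shuffled)
    total = 0.0
    for row, val, base in zip(data, shuffled, baseline):
        perturbed = list(row)
        perturbed[feature] = val
        total += abs(model(perturbed) - base)
    return total / len(data)

scores = [importance(i) for i in range(3)]
# Feature 0 dominates; feature 2 contributes nothing at all.
```

Even this crude probe recovers the structure of the black box: the score for feature 0 is largest and the score for the ignored feature is exactly zero. Production-grade explainability methods (SHAP, LIME, integrated gradients) are far more refined, but rest on the same intuition.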
Furthermore, as AI becomes integral to decision-making, establishing debiasing procedures is paramount. This, coupled with universal ethical standards and comprehensive legislative and policy measures to safeguard those susceptible to AI disruptions, will be integral to ensuring a smooth transition to the new era.
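Debiasing starts with measurement. A common starting point, sketched below on invented toy data, is the demographic parity gap: the difference in favourable-outcome rates between groups. The groups and decisions here are hypothetical; real audits would use richer fairness criteria (equalised odds, calibration) as well.

```python
# Toy audit log: (group, decision), 1 = approved. Illustrative data only.
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def approval_rate(group):
    outcomes = [y for g, y in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic parity difference: 0 means equal approval rates.
gap = approval_rate("A") - approval_rate("B")  # 0.75 - 0.25 = 0.5
```

A gap this large would flag the system for review; what counts as an acceptable threshold, and which metric applies, is precisely what ethical standards and legislation would need to pin down.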
Sustainability is another concern. With AI models becoming larger and more resource-hungry, their carbon footprint swells, making it impossible to turn a blind eye. Research must also support specialised hardware and further miniaturisation of transistors to address these challenges by optimising hardware usage, enhancing chip performance and reducing energy consumption.
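The scale of the footprint is easy to appreciate with back-of-the-envelope arithmetic. Every number below is an assumption chosen only to make the calculation concrete, not a measurement of any real training run.

```python
# Illustrative training-run footprint; all figures are assumptions.
gpus = 512                 # accelerators used
power_kw = 0.4             # average draw per accelerator, kW
hours = 24 * 30            # one month of continuous training
pue = 1.1                  # data-centre power usage effectiveness
grid_kgco2_per_kwh = 0.4   # grid carbon intensity, kg CO2 per kWh

energy_kwh = gpus * power_kw * hours * pue          # ~162,000 kWh
co2_tonnes = energy_kwh * grid_kgco2_per_kwh / 1000 # ~65 tonnes CO2
```

A single month-long run on these assumed figures emits tens of tonnes of CO2, which is why hardware efficiency and energy-aware training are genuine research problems rather than accounting footnotes.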
‘Machine Unlearning’ is gaining traction in research and industrial circles. With legislation around the right to be forgotten, there has been a growing body of research on algorithms to expunge specific portions of a dataset from previously trained models. With Google hosting its first machine unlearning challenge as part of NeurIPS 2023, this area is bound to gain momentum.
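One family of exact-unlearning approaches (SISA-style sharding) can be sketched with a deliberately trivial "model": shard the training data, fit one model per shard, and honour a deletion request by retraining only the shard that held the point. The per-shard model below is just a mean, a stand-in for any real learner.

```python
# SISA-style exact unlearning sketch: deleting a point retrains only
# the shard that contained it, never the whole ensemble.
SHARDS = 4

def train_shard(points):
    """Stand-in learner: the shard 'model' is just the mean of its data."""
    return sum(points) / len(points) if points else 0.0

data = {s: [] for s in range(SHARDS)}
for i, value in enumerate([4.0, 8.0, 6.0, 2.0, 10.0, 0.0, 5.0, 7.0]):
    data[i % SHARDS].append(value)

models = {s: train_shard(pts) for s, pts in data.items()}

def unlearn(value, shard):
    """Expunge one training point by retraining only its shard."""
    data[shard].remove(value)
    models[shard] = train_shard(data[shard])

def predict():
    """Ensemble prediction: average of the shard models."""
    return sum(models.values()) / SHARDS

unlearn(10.0, 0)  # 10.0 was assigned to shard 0 (index 4 % 4)
```

The deleted point now provably contributes nothing to any model, at the cost of retraining one shard instead of everything; real unlearning research largely concerns doing this more cheaply or approximately for deep networks.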
Lastly, there is another facet to this discussion that is somewhat better understood: the unchecked accumulation of vast amounts of invaluable data at a time when legislation has not matched the pace of developments in AI. As we envision the future of AI, it is equally crucial to deliberate on the fate of the data we’ve given away in exchange for that free messaging service.

References:

Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. "Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report." Stanford University, Stanford, CA, September 2021.
Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y.T., Li, Y., Lundberg, S. and Nori, H., 2023. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712.
McKay, C. (2023) OpenAI’s function calling & the future of GPT, Maginative. Available at: https://www.maginative.com/article/openais-function-calling-the-future-of-gpt/ (Accessed: 14 September 2023).
Pedregosa, F. and Triantafillou, E. (2023) Announcing the First Machine Unlearning Challenge, Google Research Blog. Available at: https://blog.research.google/2023/06/announcing-first-machine-unlearning.html (Accessed: 15 September 2023).
Wang, Z. (2023) What is machine unlearning, and why does it matter?, Deepgram. Available at: https://deepgram.com/learn/what-is-machine-unlearning-and-why-does-it-matter (Accessed: 15 September 2023).