By Prof Dr Simon Mak
In August 2023, Universal AI University became the first university in India to launch a campus-wide curriculum centered on AI. With a mission to “Groom global citizens who will positively impact the world using AI”, the university continued its foundation of ethical and socially responsible values from its prior days as Universal Business School, India’s first green university. Given such a background, the concept of “Ethical and Responsible AI” was a natural extension of the school’s core values. But what does this mean?
AI, like preceding technologies, faces a dilemma: how do you develop a technology to make a positive impact while knowing there is a real possibility of bad actors using that same technology? At the same time, AI differs from previous technologies in that it can train itself, and thus evolve and produce unintended consequences in its results. When thinking about how to develop AI in an ethical and responsible way, guardrails for data privacy, security, and safety take top priority.
Next, creating methods to ensure that AI outputs are authentic, credible, and legitimate stands as the second priority.
When you think about guardrails, data privacy, security, and safety are often the most pressing issues. Data privacy refers to protecting the data used as inputs to an AI model, both public and personal, so that personal identities are not revealed, as well as to the governance of that data in terms of physical storage, whether in a local environment or under country-specific data-residency rules. Security, sometimes referred to as cybersecurity, refers to measures taken to prevent data theft and hacking. Finally, safety often refers to developing filters that address hate speech, dangerous content, sexually explicit content, and harassment.
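To make the safety-filter idea concrete, here is a minimal sketch in Python. It is purely illustrative: production systems use trained classifiers rather than keyword lists, and the category names and blocklist entries below are hypothetical placeholders, not part of any real system.

```python
# Minimal sketch of a safety-filter guardrail (illustrative only).
# Real systems use ML classifiers; the terms below are placeholders.

BLOCKLIST = {
    "hate_speech": ["<hateful term>"],          # placeholder entry
    "dangerous_content": ["build a bomb"],      # placeholder entry
    "harassment": ["<harassing phrase>"],       # placeholder entry
}

def check_output(text: str) -> list[str]:
    """Return the safety categories that the text triggers."""
    lowered = text.lower()
    flagged = []
    for category, terms in BLOCKLIST.items():
        if any(term in lowered for term in terms):
            flagged.append(category)
    return flagged

def is_safe(text: str) -> bool:
    """A response passes the guardrail only if no category is flagged."""
    return not check_output(text)
```

In a deployed pipeline, a check like this would run on both the user's prompt and the model's response before anything is shown to the user.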
Next are issues related to ensuring the authenticity, credibility, and legitimacy of the results, i.e., preventing "fake news". Sometimes fake news is intentional; other times it arises from user error, such as failing to "ground" an AI query, meaning combining the information from the AI model with the latest search-engine results. It is also necessary to "source" the results by displaying the links the AI model used as data inputs, as evidence for its answers. Lastly, an AI model sometimes produces "hallucinations": results that are completely unexpected, with no clear explanation of how they arose. Hallucinations can often be reduced by decreasing the "temperature", the setting that controls how much creative freedom the model has when generating output.
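The effect of temperature can be sketched with the standard softmax-with-temperature calculation that language models use to turn raw scores into token probabilities. The logit values below are invented for illustration; the point is only the shape of the distribution.

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Convert raw model scores into probabilities.
    Lower temperature sharpens the distribution toward the top choice;
    higher temperature flattens it, allowing more varied picks."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                         # made-up scores for three candidate tokens
low = softmax_with_temperature(logits, 0.2)      # low temperature: nearly deterministic
high = softmax_with_temperature(logits, 2.0)     # high temperature: more "creative", riskier
```

At temperature 0.2 almost all probability lands on the top-scoring token, while at 2.0 the mass spreads across all three options, which is why lowering the temperature tends to reduce surprising, hallucination-prone output.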
By proactively teaching students how to create guardrails for data privacy, security, and safety, and how to ensure that data and results are authentic, credible, and legitimate, universities can educate the next generation of AI developers to be aware of ethical issues in AI development, making them more responsible builders of AI models and platforms.
Prof Dr Simon Mak is the founding vice chancellor of Universal AI University in Karjat-Mumbai. He joined the university in August 2024 as the first American vice chancellor in India after a 20-year stint at the SMU Cox Caruth Institute for Entrepreneurship in Dallas, Texas, where over the last five years he oversaw the center and department and developed programs on blockchain entrepreneurship and space entrepreneurship. Prior to SMU, Dr Mak worked at a Silicon Valley startup that went public, started his own healthcare IT dot-com, and then worked at a Linux software company, seeing it through its sale to a Japanese strategic investor. Dr Mak received his BS/BTech in Mechanical Engineering from the Massachusetts Institute of Technology (MIT), and an MBA and PhD in Systems Engineering, both from SMU. He can be found at @profsmak, and the views expressed in this article are his own.