
Navigating the Future of Business Success: The Power of AI-Driven Decision-Making

In the current era where data holds immense influence, it becomes crucial to acknowledge the invaluable contribution of human intuition, experience, and imagination. As the prevalence of data-driven decision-making continues to grow, recognizing the potential of a synergistic relationship between data and human intellect will enable companies to become digital transformation partners. We had Sunil Senan, Senior Vice President and Business Head, Data and Analytics, Infosys, share insights into how enterprises can harness the strengths of both human insight and data-driven analytics to make informed, ethical decisions that benefit all stakeholders involved.

  1. How will AI-driven decision-making drive the new frontier for business success?

From automating routine tasks to predictive analytics and risk assessment, AI is setting new standards in efficiency, accuracy, and error reduction. Enterprises are increasingly focusing on Responsible AI and driving AI trustworthiness. At Infosys, we build solutions for our clients across functional areas, helping them make decisions. For instance, we help clients make better decisions about media budget allocation, product features and colors, product formulation, and inventory stocking, leading to tangible business benefits such as increased sales, reduced costs, and improved customer satisfaction.

For example, a firm selling industrial coolers and air conditioners can now plan procurement and production many months further in advance than its competition thanks to AI-based forecasts, creating a strong differentiator on the efficiency frontier.

A large metallurgy firm will be able to offer safer products thanks to AI-based corrosion prediction for specific environments, creating a strong differentiator on the effectiveness, or quality, frontier.

  2. Why is it important that organizations are responsible and ethical in the adoption of AI?

Responsible and ethical AI ensures fairness, transparency, privacy, safety, and accountability in AI systems. It also secures end-user and stakeholder trust in AI-driven applications and enables their widespread enterprise usage. Across Europe and North America, AI regulations are being debated; in Europe, a draft law has already been published. These regulations are primarily about assigning responsibility for AI system outcomes that are later challenged by the subjects they affect. For an organization to stay compliant and avoid ethics-related bad press, responsible AI is non-negotiable.

Infosys has developed and adopted a “Responsible AI (RAI) Framework” to overcome ethical AI challenges and build trustworthy systems powered by Infosys Topaz. These factors are essential for an organization to ensure wider adoption of AI models and drive business benefits. The framework:

  • Helps build trust and confidence in AI models among stakeholders such as customers, investors, and employees.
  • Ensures compliance with upcoming AI regulations such as the EU AI Act and the US AI Bill of Rights.
  • Helps gain a competitive edge and achieve sustainable adoption.

  3. How does Infosys leverage deep learning to build trustworthy AI decision-making on the semantic layer?

The deep learning models used to provide AI decision-making on the semantic layer depend on (a) the task, (b) the scenarios planned for, (c) the data type, (d) the edge use cases considered, and (e) the terminology definitions that are needed from general intelligence.

Different neural network models may be best for text analysis, image analysis, or voice. Also, depending on the industry and its edge cases, different data interpretation techniques may be needed. A key role of the semantic layer is ensuring that definitions are consistent and that terms are interpreted accurately in context; generative AI and LLMs are leveraged for such functions.
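
To make this concrete, here is a minimal sketch, in Python, of one semantic-layer function described above: resolving the different terms business users employ to a single canonical metric definition so that downstream models interpret them consistently. The glossary entries and term names are illustrative assumptions, not Infosys definitions; in practice an LLM would handle fuzzier matching and context.

```python
# Illustrative sketch: a semantic-layer glossary that maps user terminology to
# canonical metric definitions. Entries here are invented for illustration.
GLOSSARY = {
    "net revenue": {"canonical": "net_revenue",
                    "definition": "gross sales minus returns, discounts, and allowances"},
    "turnover":    {"canonical": "net_revenue",
                    "definition": "gross sales minus returns, discounts, and allowances"},
    "churn":       {"canonical": "customer_churn_rate",
                    "definition": "customers lost in period / customers at start of period"},
}

def resolve_term(user_term: str) -> dict:
    """Return the canonical metric for a user-supplied term, or flag it for review."""
    entry = GLOSSARY.get(user_term.strip().lower())
    if entry is None:
        # In a fuller implementation, an LLM could propose the closest canonical term here.
        return {"canonical": None, "definition": "unknown term - route to a data steward"}
    return entry

print(resolve_term("Turnover"))   # resolves to net_revenue
print(resolve_term("ARPU"))       # unknown, flagged for human review
```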

From a trust perspective, ethical guardrails are built in to minimize misinterpretations that could lead to ethical issues, and significant training and testing are included to reduce errors.

Infosys builds and uses Deep Learning based systems under the purview of its Responsible AI Framework. Under this framework, trustworthiness is ensured by instituting the principles of Fairness, Privacy, and Explainability. This applies to the models on the semantic layer as well.

  4. Is there any regulatory approach Infosys is taking to allow corresponding regulators to flexibly govern algorithms used in the semantic layer?

Firstly, Infosys has developed its own Responsible AI Framework, which accommodates applicable regulatory compliance for the various models developed in the semantic layer. Secondly, we implement fine-grained access control on the semantic layer to ensure authorized access to data, based on the data subject’s consent and the legitimate business purpose of processing. Lastly, audit controls are implemented that allow regulators to audit the activities performed on personal and sensitive data.
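
As a rough illustration of the latter two controls, the sketch below (Python, with invented policy entries and field names, not Infosys code) grants access to a semantic-layer field only when the data subject has consented to the stated purpose, and writes every access attempt to an audit log that a regulator could inspect.

```python
# Hedged sketch: consent- and purpose-based access to semantic-layer fields,
# with an audit trail. Policy entries and field names are illustrative only.
import json
import time

CONSENT = {"subject-42": {"marketing": False, "service_delivery": True}}
SENSITIVE_FIELDS = {"email", "salary"}
AUDIT_LOG = []

def read_field(user: str, subject_id: str, field: str, purpose: str) -> str:
    """Release a field only for consented purposes; log every attempt."""
    consented = CONSENT.get(subject_id, {}).get(purpose, False)
    allowed = consented or field not in SENSITIVE_FIELDS
    AUDIT_LOG.append({
        "ts": time.time(), "user": user, "subject": subject_id,
        "field": field, "purpose": purpose, "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"'{field}' not released for purpose '{purpose}'")
    return f"<{field} of {subject_id}>"  # placeholder for the actual data lookup

print(read_field("analyst-7", "subject-42", "salary", "service_delivery"))
print(json.dumps(AUDIT_LOG[-1], indent=2))  # what an auditor would see
```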

  5. How is Infosys aiding enterprises to build more equitable AI systems and cultivate a data-driven culture?

The foundation for building a data-driven culture is democratized data, analytics, and AI systems that are Responsible By Design. This includes managing governance, access, and ethics across systems in ways that monitor for bias and actively mitigate sources of bias and other inequities in data and AI systems. The industry faces a challenge around human prejudice making its way into the data that algorithms use, and it has become an issue for many companies looking to incorporate generative AI technology into their systems and products.

At Infosys we have launched Topaz, an AI-first set of services, solutions, and platforms, to amplify the potential of humans, enterprises, and communities. With Topaz, Infosys creates value for and with clients through unprecedented innovations, pervasive efficiencies, and connected ecosystems. A critical facet of Topaz is that we employ a consistent and mutually reinforcing set of principles, practices, and procedural controls to manage training data provenance, ethics, trust, and social responsibility. We call our practice “Responsible By Design”. This discipline includes the rigorous application of multiple approaches to manage AI bias:

  • Diversify and pre-process training data to avoid bias creeping into AI models.
  • Monitor and track bias in model output by conducting regular audits, checking for biased language and patterns of bias, setting up model observability for comprehensive understanding, explanation, and diagnosis, and developing guardrails such as moderator models for bias detection.
  • Provide prompt-engineering guidelines and training on how to construct acceptable prompts, enable a shield that protects against non-compliant prompts, evaluate generative AI responses to these “acceptable” prompts, and adjust guidelines as needed.
  • Apply bias-mitigation practices such as adversarial training, counterfactual data augmentation, and re-sampling (see the sketch after this list).
  • Create a culture of transparency and openness, and encourage user feedback.
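
The sketch below illustrates one item from this list, re-sampling, in Python: up-sampling under-represented groups so that a sensitive attribute is evenly represented in the training data. The column names and toy dataset are assumptions made for illustration; this is not Infosys tooling.

```python
# Hedged sketch of re-sampling as a bias-mitigation step: up-sample every
# group of a sensitive attribute to the size of the largest group.
import pandas as pd

def resample_by_group(df: pd.DataFrame, group_col: str, random_state: int = 42) -> pd.DataFrame:
    """Up-sample each group so all groups match the largest group's size."""
    target = df[group_col].value_counts().max()
    balanced = [
        grp.sample(n=target, replace=True, random_state=random_state)
        for _, grp in df.groupby(group_col)
    ]
    return pd.concat(balanced).sample(frac=1, random_state=random_state).reset_index(drop=True)

# Toy training set in which one group dominates.
train = pd.DataFrame({
    "gender": ["F"] * 20 + ["M"] * 80,
    "label":  [0, 1] * 10 + [0, 1] * 40,
})
balanced_train = resample_by_group(train, "gender")
print(balanced_train["gender"].value_counts())  # both groups now equal in size
```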

By applying our Responsible By Design principles, we at Infosys bring AI Ethics front and center to our Data Analytics and AI practice.

  6. Does Infosys Topaz warrant human supervision at all steps of AI processes? How do you see this playing out in the future of doing business?

Various AI models as part of Infosys Topaz warrant a human feedback loop rather than a completely autonomous system. A human feedback loop and a governance infrastructure are key ingredients for the successful adoption of AI in business.

However, to allow scale and automation, several AI systems escalate exceptions for human review based on thresholds, data sparsity, and similar criteria. In this way, the key advantage of scale from AI is retained without allowing it to run completely unsupervised. So it is not human supervision at every step, but human review by exception and feedback loops based on outcomes, as sketched below.
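
A minimal sketch of this review-by-exception pattern follows; the thresholds, case fields, and queue are illustrative assumptions rather than Infosys Topaz internals. High-confidence, well-supported decisions flow through automatically, while anything uncertain or data-sparse is routed to a human reviewer.

```python
# Hedged sketch: human review by exception. Thresholds and fields are invented.
from dataclasses import dataclass
from typing import List

CONFIDENCE_THRESHOLD = 0.85    # below this, escalate to a human reviewer
MIN_SUPPORTING_RECORDS = 30    # sparse supporting data also triggers escalation

@dataclass
class Decision:
    case_id: str
    prediction: str
    confidence: float
    supporting_records: int

def route(decision: Decision, review_queue: List[Decision]) -> str:
    """Auto-approve confident, well-supported decisions; escalate the rest."""
    if (decision.confidence < CONFIDENCE_THRESHOLD
            or decision.supporting_records < MIN_SUPPORTING_RECORDS):
        review_queue.append(decision)  # feeds the human feedback loop
        return "escalated"
    return "auto-approved"

queue: List[Decision] = []
print(route(Decision("C-101", "approve", 0.97, 120), queue))  # auto-approved
print(route(Decision("C-102", "approve", 0.62, 120), queue))  # escalated (low confidence)
print(route(Decision("C-103", "reject", 0.91, 5), queue))     # escalated (sparse data)
```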

  7. What are some of the best practices, from a practitioner’s perspective, for enhancing predictive data analytics with generative AI?

Generative AI models can produce new data samples that are similar to the original dataset in terms of statistical properties. This allows businesses to simulate numerous scenarios and predict outcomes for previously unseen situations. By leveraging generative AI, organizations can uncover hidden patterns, generate synthetic data for testing, and increase the overall accuracy and robustness of their predictive models.
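
As a toy illustration of generating synthetic data that preserves statistical properties, the sketch below fits a simple multivariate Gaussian to a numeric dataset and samples new records from it; a production system would typically use a richer generative model (GAN, VAE, or similar), and the feature semantics here are invented.

```python
# Hedged sketch: synthetic data that preserves the mean and covariance of the
# original numeric data. A simple Gaussian fit stands in for a full generative model.
import numpy as np

rng = np.random.default_rng(0)

# "Original" data: two correlated features (say, demand and price), invented here.
original = rng.multivariate_normal(mean=[100.0, 20.0],
                                   cov=[[25.0, -8.0], [-8.0, 4.0]],
                                   size=500)

# "Fit" the generative model: the empirical mean and covariance.
mu = original.mean(axis=0)
sigma = np.cov(original, rowvar=False)

# Sample synthetic records for scenario simulation or model testing.
synthetic = rng.multivariate_normal(mean=mu, cov=sigma, size=500)

print("original mean: ", np.round(mu, 2))
print("synthetic mean:", np.round(synthetic.mean(axis=0), 2))
print("original cov:\n", np.round(sigma, 2))
print("synthetic cov:\n", np.round(np.cov(synthetic, rowvar=False), 2))
```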

Generative AI can help search for suitable forecasting algorithms, suggest algorithms more commonly used in specific industries, explain existing forecasting code, and so on. For example, when upgrading a predictive model that has long been in use at a firm, it is important to ensure that changes made to the software over the years, based on accumulated learning or the firm’s specific business, are not lost. Code explanation using generative AI can highlight such inputs in the existing software so that old, firm-specific learnings are not missed while adopting more modern techniques.
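
To show how such a code-explanation step might be framed, here is a hedged sketch: a prompt asking a generative model to surface firm-specific rules buried in legacy forecasting code before it is rebuilt. The legacy snippet is invented, and `call_llm` is a hypothetical placeholder for whichever LLM client is actually in use.

```python
# Hedged sketch: using generative AI to explain legacy forecasting code so that
# firm-specific rules survive a model upgrade. Snippet and helper are invented.
LEGACY_FORECAST_SNIPPET = '''
def adjust_forecast(base_forecast, month):
    # +18% uplift in March: long-standing annual distributor order
    if month == 3:
        return base_forecast * 1.18
    return base_forecast
'''

PROMPT = (
    "Explain this forecasting code and list every business-specific rule or "
    "hard-coded adjustment that must be preserved if the model is rebuilt with "
    "a modern technique:\n" + LEGACY_FORECAST_SNIPPET
)

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder: wire this to the LLM client actually in use."""
    raise NotImplementedError

# explanation = call_llm(PROMPT)
# The explanation should flag the March uplift so it is carried into the new model.
```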
