Transformative AI solutions are already being used by many organisations and employees across sectors to address some of the biggest challenges their businesses face today. In this article, we explore how responsible AI can drive efficiency, mitigate risks, and unlock new opportunities.
Author: Prathiba Krishna, AI and Ethics Lead at SAS
How to govern the reliability and trustworthiness of AI
The cost of doing business is rising, with UK businesses experiencing a 24% increase in the average cost of goods and services over the last two years. There is growing pressure on organisations to be more productive and efficient to ensure ROI, and leaders are looking for ways to use AI to automate processes and streamline their operations – freeing their workers to spend more time on higher-value tasks that improve services.
Last year, we found that organisations embracing GenAI are seeing significant benefits. Based on a global survey of 1,600 organisations across a range of sectors, 89% told us that using generative AI (GenAI) had improved employees’ experience and satisfaction, 82% said they had made savings on operational costs and 82% stated that customer retention was higher.
And with the recent announcement of the UK government’s AI Opportunities Action Plan, we expect this not only to trigger public sector investment in AI, but also to encourage more private sector investment.
Recent data from Workday revealed that AI improved productivity by giving workers an extra 92 days a year to focus on responsibilities that provide more value – showing the true power of AI and the potential gain for organisations.
Despite this, concerns around AI remain: Workday found that 93% of employees and business leaders still had trust-related worries about using the technology. They remain concerned about transparency and responsible AI practices, and want assurance that the technology is trustworthy and effective.
We found this in our research too. Three-quarters of respondents said they were concerned about data privacy (76%) and security (75%) when GenAI was used in their organisation. So as adoption grows, there is more that can be done to prepare organisations to comply with current and upcoming regulations and to ensure a comprehensive governance framework is in place.
Managing the risks
The advancement of any technology brings with it significant risks. While companies using AI want to be seen as innovative and to help their workforce improve productivity, not all understand how to ensure that AI is being used responsibly and safely across their organisation. A balance has to be struck between using AI responsibly and not restricting innovation.
Again, our research confirmed this, with more than nine in 10 senior tech decision-makers (93%) admitting that they do not fully understand GenAI or its potential impact on business processes. We found that many organisations were not fully prepared to comply with regulations and did not have GenAI governance in place or a means to monitor the technology.
On top of that, other risks such as data privacy concerns, biases in AI models and cybersecurity vulnerabilities need to be considered.
So, while the adoption of AI is positively encouraged and is now being pushed by the UK government, organisations should understand how they can proactively prepare themselves in this era of exciting change.
The adoption of trustworthy principles can be ranked and prioritised according to the organisation’s data and AI maturity. To keep trustworthy AI at the centre of innovation, at SAS we have six guiding principles – human centricity, transparency, inclusivity, accountability, privacy and security, and robustness.
Setting clear standards that are relevant to your industry is a good starting point for harnessing trustworthy AI. Incorporating core principles into the design of AI models – for example, drawing on diverse perspectives and datasets – can lead to more positive outcomes.
Human oversight will be vital to the success of any AI deployment. From our experience, having implemented thousands of AI solutions for the public and private sectors, human intervention at key stages – from solution design to ongoing review – is critical for maintaining model performance and ensuring responsible use. All models degrade over time, so continuous monitoring and adaptation by software testers is necessary to sustain meaningful insights.
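To make that monitoring concrete, here is a minimal sketch in Python of one widely used drift check, the population stability index (PSI). It is illustrative only: the synthetic data, the bin count and the rule-of-thumb 0.2 threshold are assumptions for the example, not part of any specific SAS product.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Measure how far a model's live score distribution has drifted
    from the distribution observed at deployment time."""
    # Bin edges come from the baseline so both samples are compared alike
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range values
    expected = np.histogram(baseline, bins=edges)[0] / len(baseline)
    actual = np.histogram(current, bins=edges)[0] / len(current)
    expected = np.clip(expected, 1e-6, None)  # avoid log(0) below
    actual = np.clip(actual, 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

# Synthetic stand-ins for scores at deployment time vs. in production
rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.0, 1.0, 10_000)
live_scores = rng.normal(0.4, 1.2, 10_000)

psi = population_stability_index(baseline_scores, live_scores)
if psi > 0.2:  # a common rule-of-thumb threshold for significant drift
    print(f"PSI = {psi:.3f}: significant drift, flag for human review")
```

When the index crosses the threshold, the model is flagged for exactly the kind of human review described above, rather than being left to degrade silently.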
Maintaining commercial success
Trustworthy AI can help ensure safety and reliability through the guiding principles mentioned above. It needs to be planned before the first line of code is written and sustained throughout the AI lifecycle as a continuous process, with human intervention at every stage.
AI processes should be able to combine model outcomes and business rules to embed social aspects of the data. It’s important to build fail-safes and adopt redundancy measures to ensure critical decisions are reviewed or overridden by human operators when necessary.
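As a hedged illustration of that pattern, the sketch below combines a model score with explicit business rules and routes critical or borderline cases to a human operator. The thresholds, field names and outcomes are hypothetical, chosen only to show the shape of such a fail-safe.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str   # "approve", "decline" or "human_review"
    reason: str

def decide(model_score: float, applicant_age: int, exposure: float) -> Decision:
    """Blend a model score with explicit business rules, failing safe
    to human review for critical or borderline cases."""
    # Business rule: hard constraints take precedence over the model
    if applicant_age < 18:
        return Decision("decline", "under minimum age (business rule)")
    # Fail-safe: high-impact decisions always go to a human operator
    if exposure > 100_000:
        return Decision("human_review", "exposure above override threshold")
    # Redundancy measure: borderline scores are never fully automated
    if 0.4 <= model_score <= 0.6:
        return Decision("human_review", "low-confidence model score")
    return Decision("approve" if model_score > 0.6 else "decline",
                    f"model score {model_score:.2f}")

print(decide(model_score=0.55, applicant_age=30, exposure=5_000))
# Decision(outcome='human_review', reason='low-confidence model score')
```

The design point is that the rules and override paths sit outside the model itself, so they remain legible to the business even as the model changes.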
Having the right capabilities in your AI platform is necessary, but technology alone is not enough – it also takes a comprehensive governance approach, involving people and solid processes.
Governance around AI can help to create a set of rules and guidelines for companies to ensure they are using AI responsibly and can reap the benefits as a result. Organisations demonstrating AI governance are also more likely to be commercially successful as it can support them to:
- Unlock business value – by increasing the productivity of the workforce through trusted and distributed decision-making.
- Strengthen trust – not just among customers and employees, but more widely, through greater accountability in data usage.
- Win and keep talent – ensuring they attract the best talent who expect to use AI in their role and prioritise responsible innovation practices at work.
- Drive competitive advantage – a proactive approach to compliance gives companies an edge over competitors and greater market agility.
- Enhance brand value – addressing the potential societal and environmental impacts of AI can only strengthen an organisation’s reputation and brand.
Transparency is key
It’s important for businesses to guide AI development with transparency to maintain human accountability over the technology. Transparency is crucial for building trust and demystifying AI, helping to establish data lineage and explaining to customers and regulators how model predictions and decisions are made.
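One lightweight way to support that lineage and explainability in practice is to keep an append-only audit record of each decision a model makes. The sketch below uses only the Python standard library; the field names, model identifiers and values are entirely hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_decision(model_version: str, training_dataset: str,
                    features: dict, prediction: float,
                    top_factors: list) -> dict:
    """Append a tamper-evident record of one model decision, so it can
    later be traced back to the model and data that produced it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,        # which model made the call
        "training_dataset": training_dataset,  # lineage back to the data
        "input_hash": hashlib.sha256(          # stable reference to inputs
            json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "prediction": prediction,
        "top_factors": top_factors,            # e.g. output of an explainer
    }
    with open("decision_log.jsonl", "a") as log:
        log.write(json.dumps(record) + "\n")
    return record

record_decision("credit-model-2.3", "training-set-2024-11",
                {"income": 42_000, "tenure_months": 18},
                prediction=0.71,
                top_factors=["income", "tenure_months"])
```

Each record ties a prediction back to the model version and training data that produced it – the raw material for answering a regulator’s question of how a decision was made.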
Continuous improvement processes – such as updating systems when new data is available, retraining models to keep them relevant and adapting to changing conditions – are necessary when it comes to implementing trustworthy AI.
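Purely as an illustrative sketch of that loop – the scikit-learn model, synthetic data and AUC metric are assumptions for the example, not a prescribed stack – a retrained model might be gated behind a comparison on a fixed holdout set before it replaces the current one:

```python
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the original data and newly arrived data
X, y = make_classification(n_samples=5_000, random_state=0)
X_old, X_rest, y_old, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_new, X_hold, y_new, y_hold = train_test_split(X_rest, y_rest, test_size=0.4, random_state=0)

current = LogisticRegression(max_iter=1000).fit(X_old, y_old)

def retrain_if_better(model, X_train, y_train, X_hold, y_hold):
    """Retrain on new data, but only promote the candidate model if it
    beats the current one on a fixed holdout set."""
    candidate = clone(model).fit(X_train, y_train)
    current_auc = roc_auc_score(y_hold, model.predict_proba(X_hold)[:, 1])
    candidate_auc = roc_auc_score(y_hold, candidate.predict_proba(X_hold)[:, 1])
    if candidate_auc > current_auc:
        return candidate, candidate_auc  # promote the retrained model
    return model, current_auc            # keep the existing model

model, auc = retrain_if_better(current, X_new, y_new, X_hold, y_hold)
print(f"deployed model holdout AUC: {auc:.3f}")
```

Gating promotion on a fixed holdout keeps retraining an improvement process rather than a source of silent regressions.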
Ultimately, trustworthy AI is not just a regulatory obligation; it’s a strategic imperative that can drive long-term success and customer loyalty. And so, while legislation is starting to be implemented in various countries and governments are putting their own guidance in place, it is still beneficial for businesses and their AI leaders to define their own set of rules for employees to work by.
Having these strategies in place helps to protect the workforce and truly unlock the transformative power of AI, ensuring a sustainable and customer-centric approach that safeguards an organisation’s reputation as well as the interests of its customers and communities.