The TIC Council held its TIC Summit for the conformity assessment sector (organisations conducting testing, inspection, certification, validation and verification services) in Brussels on 14 May. This year the summit aimed to answer the question: “What does it take to build trust in AI?”
Questions around the development and deployment of emerging technologies, including Artificial Intelligence (AI), have been prominent within the international conformity assessment community for some time. Previous events and discussions have indicated a consensus on the need for a strong international, collaborative approach to ethical AI governance: one that allows innovation to thrive whilst managing risk and preventing harm.
The TIC Summit, opened by Hanane Taidi, Director of the TIC Council, hosted a wide range of international organisations involved in AI, bringing together academic, industry and regulatory voices.
The event also featured panel discussions exploring the many opportunities for the TIC sector to demonstrate leadership by using standards to support the safe and ethical deployment of AI technology.
Bridging the trust gap
During the session titled ‘To AI or not to AI’, panellists explored how the global quality infrastructure can bridge the “trust gap” between emerging technologies and their adopters, which is so often the main impediment to widespread uptake. The panel agreed that AI, if adequately underpinned by standards and accredited conformity assessment, can be a force for good, with particularly strong opportunities in the fields of education and healthcare.
International collaboration was once again highlighted as essential to success, with initiatives such as the Walbrook AI Accord serving as an example of the healthy appetite for global partnership that already exists, enabling a collaborative approach to learning about, developing and governing AI.
Act now to harness momentum
The Lord Mayor of London, Michael Mainelli, addressed the summit via video link, underlining the critical juncture at which the global quality infrastructure finds itself when it comes to the opportunities presented by AI. The Lord Mayor echoed the sentiments of the AI Summit at Mansion House: now is the time to harness the momentum already achieved in this area, such as the work the City of London Corporation has done with UKAS and the TIC Council on the Walbrook Accord.
Among the many other sessions held on the day, Jennifer Baker moderated a ‘fireside chat’ that examined the collective responsibilities emerging from the era of AI. This session highlighted the growing opportunity for standards and accreditation to bring clarity to a confused global regulatory space. The panel also indicated a strong appetite for incorporating third-party assessment within any future AI-related regulation, as an important safeguard for the protection of consumer interests. The discussion reflected a unanimous feeling that now is the time for policymakers, the quality infrastructure and industry across all sectors to work collaboratively to ensure appropriate and robust safeguards are in place before there is any risk of impact to end-users.
A golden opportunity
UKAS CEO Matt Gantley was pleased to attend the summit and participated in a panel discussion titled ‘Is AI governable, is AI certifiable?’ With regard to the global quality infrastructure’s role in the development of AI assurance mechanisms, Matt stated: “This is our golden opportunity to be ahead of the curve. We must collaborate on quality infrastructure in all its forms, develop skills and develop standards.”
Panellists agreed that the international standards community provides an existing common framework for policymakers to utilise, which can be underpinned by accredited conformity assessment to scale up AI assurance safeguards.
Next steps: the Walbrook AI Accord
The Walbrook AI Accord is an important element in the establishment of ethical AI standards, and advocates the use of standards for comprehensive, firm-wide certification. The initiative is international in perspective and places emphasis on adopting existing ISO standards to achieve this, rather than creating new ones.
The Walbrook Accord has three main intended outcomes:
- Advocating for the adoption of quality infrastructure for AI assurance.
- Developing assurance standards and methodologies.
- Facilitating training and skill development for AI assurance professionals.
The Accord invites collaboration on a voluntary, non-binding basis, serving as a platform to unify and strengthen collective efforts over time. Development of the Accord has been well supported by international organisations such as ISO and IAF, and the initiative is inclusive, with membership open to all.
A Statement of Intent for the Walbrook Accord can be found on the Lord Mayor’s website.