The birth of the PC and the subsequent introduction of the web and internet changed the way we do business. The mainstreaming of AI, thanks to publicly available large language models like ChatGPT and Meta’s Llama, is likely another inflection point of similar or even greater magnitude. Yet there are challenges around the adoption of AI, both in our personal lives and within our organisations.
The question is, how can we build AI models that are accurate and free of bias, and that serve the needs of humanity? Is governance required, and if so, how can we make it happen?
Building responsible AI
Nations are beginning to regulate AI, with a recent Executive Order in the US and proposed rules governing AI in the EU. For Australian business leaders, it’s important to be across any pending legislation so that you can embrace the AI opportunity while managing risk and compliance.
At Dell Technologies, our guiding principles for AI are underpinned by the Three S’s: AI should be Shared, Secure and Sustainable. Policymakers can apply the same three principles to drive responsible regulation of AI while harnessing its immense innovative potential for the betterment of all.
Let’s take a deeper dive into exactly what those Three S’s cover:
Shared represents an integrated, multi-sector and global approach built in alignment with existing tech policies and compliance regulations such as those governing privacy.
Secure means focusing on security and trust at every level, from infrastructure to the output of machine learning models, ensuring AI remains a force for good while also being protected from threats and treated like the extremely high-value asset it is.
Sustainable represents an opportunity to harness AI while protecting the environment, minimising emissions and prioritising renewable energy sources. AI is the most compute- and energy-intensive technology we’ve ever seen, and we must invest as much in making it sustainable as in creating it.
Some regulations will be easy to implement, but others, requiring cooperation across jurisdictions, will be trickier. Regulation will need balance: between over-regulating, or implementing rules that are simply wrong, and failing to implement rules that take AI’s potential power and effects into consideration.
But regulation is still needed, particularly around transparency and disclosure, because that is what will drive trust among the people and organisations using this new technology.
Dealing with bias and hallucinations
One of the issues with generative AI is that, if it does not know the answer to a prompt, it can make things up. These false responses are known in the industry as hallucinations. Generative AI can also, depending on the training set used, exhibit problems with bias.
To counter bias, it’s important to have a diverse training set, and to test responses using both other AI systems and human review. For corporates wanting to explore genAI in their organisations, the best path forward is to train the AI on in-house data rather than relying on a public model. If, for example, you have removed non-inclusive language from your training data, that form of bias is far less likely to surface, because the model will never have seen it in the first place.
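As a concrete illustration, here is a minimal Python sketch of that kind of data screening before training. The term list, file names and simple string matching are hypothetical placeholders; a real pipeline would use a vetted inclusive-language word list and human review of anything flagged.

```python
# Hypothetical sketch: screening a JSONL training corpus for disallowed terms
# before fine-tuning. Term list and file names are illustrative only.
import json

DISALLOWED_TERMS = {"blacklist", "whitelist"}  # placeholder; use a vetted list

def is_clean(text: str) -> bool:
    """Return True if the record contains none of the disallowed terms."""
    lowered = text.lower()
    return not any(term in lowered for term in DISALLOWED_TERMS)

with open("training_data.jsonl") as src, open("training_data.clean.jsonl", "w") as dst:
    for line in src:
        record = json.loads(line)
        if is_clean(record["text"]):
            dst.write(line)
        # Failing records could be logged for human review rather than
        # silently dropped.
```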
Dealing with hallucinations is a little more complex, but the best path forward is what’s known as an “adversarial system”, where the results of one AI are cross-referenced against the results from another AI system to check for inaccuracies or biased responses.
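In code, that adversarial pattern might look something like the sketch below. The answer_model and review_model callables are hypothetical stand-ins for two independent LLM backends, say an in-house model and a separate vendor API, and the PASS convention is an assumption made for illustration.

```python
# Hypothetical sketch of an adversarial cross-check: one model answers,
# a second, independent model reviews the answer for problems.
from typing import Callable

def adversarial_check(
    prompt: str,
    answer_model: Callable[[str], str],
    review_model: Callable[[str], str],
) -> dict:
    answer = answer_model(prompt)
    critique = review_model(
        "Review the following answer for factual errors, fabricated details "
        "or biased language. Reply PASS if none are found, otherwise list "
        f"the problems.\n\nQuestion: {prompt}\nAnswer: {answer}"
    )
    flagged = not critique.strip().upper().startswith("PASS")
    return {"answer": answer, "critique": critique, "flagged": flagged}
```

Responses the reviewer flags would typically be routed to a human rather than shown to end users.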
The reality is that bias and hallucinations are both technical and data problems, and you need to create and train an AI system that is compatible with your company’s culture and values.
Getting started with AI in your organisation
There’s no argument AI is going to have a massive impact on the way we do business. The question is, how to start?
The best way forward is to begin with small-scale experiments trained on non-business-critical data. It’s more important to create quick wins on your first few AI projects and learn from them than to tackle the AI project with the largest ROI, which may also be the most complex to implement. You can then take what you have learned from these trials and incorporate it into a wider-scale rollout when the time is right for your organisation.
At this point, you need to focus on the areas where AI will be a differentiator for your company, and those areas will, obviously, be different for each organisation. For example, if HR isn’t a core differentiator for your business, then it makes sense to wait for your HR software vendor to build AI into its products, rather than trying to do it yourself.
If, on the other hand, you’re in manufacturing, then focusing AI on how you design, test and build new products will create a competitive advantage.
The idea is to bring AI to your data: research indicates 50% of AI deployments will be on-premises, mainly due to data gravity – that is, the AI goes where the data already lives – or for governance reasons, with legislation requiring the data to stay there.
The final step is to know you don’t have to build AI yourself. You can use external or open-source models to create your in-house AI and then train it on your corporate data. It’s not necessary to reinvent the wheel with AI, given the availability of pre-built, pre-validated models from various vendors.
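As an illustration, the sketch below loads a pre-built open model with the Hugging Face transformers library rather than training anything from scratch. The model name is illustrative (Llama 2 is gated behind a licence acceptance on Hugging Face), and any open model approved by your governance team would slot in the same way before being fine-tuned on corporate data.

```python
# Hypothetical sketch: reusing a pre-built open model instead of building
# your own. Requires the transformers library and, for this model,
# licence acceptance on Hugging Face.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # illustrative model choice
)

result = generator(
    "Summarise our returns policy for a customer:",
    max_new_tokens=100,
)
print(result[0]["generated_text"])
```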
Corporate leaders and boards in Australia should bring together cross-functional teams, with wide representation from HR, legal, IT and business units, that can balance potential use-case upsides against risk and ROI, and work to implement responsible AI policies that help them navigate and show leadership in this space. Dell recently appointed Jeff Boudreau as its chief AI officer. Is your organisation treating the potential AI brings as a board-level responsibility?
And for businesses wanting to get on board with the AI revolution, it’s important to know you don’t need to do the heavy, expensive lifting yourself. Partnering with others, using existing models and starting with small experiments are all key to getting underway with AI.