The risks and rewards of ChatGPT in the modern business environment


GUEST OPINION: ChatGPT continues to lead the news cycle and increase in popularity, with new applications and uses seemingly uncovered each day for this innovative platform.

However, as interesting as this solution is, and as many efficiencies as it already provides to modern businesses, it is not without its risks. As with any new technology on the market, especially one underpinned by artificial intelligence (AI) and machine learning (ML), its potential to introduce cybersecurity risks to businesses is significant.

Jason Whyte, general manager for Pacific, Trustwave, said, “It’s undeniable that ChatGPT, and other similar, competing solutions, have captured the attention of the modern business world as much as consumers. There are so many potential applications that could benefit businesses and drive efficiencies, empowering users to automate processes and pivot their attention to other high-value business tasks.

“However, with great reward comes great risk, especially in a digital society. Businesses must be cognizant of the potential risks they are exposed to when using ChatGPT, and other software applications, to understand how best to protect their data and their business.”

Two of the biggest risks facing businesses using ChatGPT are privacy concerns and the potential for malicious threat actors to weaponise the platform.

1. Privacy: While ChatGPT can’t directly access the internet, it retains any private, confidential, or proprietary information entered into it. Every piece of data input into ChatGPT can feed back into training the software, which means any private information shared with the platform is no longer private.

2. Weaponising: While ChatGPT is still isolated and can’t be used directly in an attack, cybercriminals have as much access to it as anyone else. As such, malicious threat actors are already using ChatGPT for tasks such as making malware harder to detect, or to fill skills gaps in criminal organisations, such as coding, to improve the quality of their attacks.

Jason Whyte said, “Already, there have been instances of data breaches linked to the platform, exposing intellectual property belonging to major brands. Businesses must be aware of the risk that private and confidential information they share could be leaked. Additionally, cybercriminals use the application in the same way as other organisations, which is bad news for businesses if or when they’re exposed to malware written with the help of an intelligent platform.

“However, despite the risks, as ChatGPT continues to evolve, it will also continue to deliver great benefits to organisations. ChatGPT is a powerful tool that can help organisations rapidly review thousands of lines of code or alerts to identify when IT and security personnel need to be involved. From a security operations centre perspective, this will provide significant benefits for businesses in the future as the platform continues to mature.

“Beyond this, if and when isolated instances of ChatGPT become available, businesses will likely leap at the opportunity to leverage the platform confined within their organisations to continue to automate and streamline processes. While this will likely introduce its own security risks, there will continue to be benefits that power the organisations of the future.”
