AI didn't create new risks... it multiplied old ones

In the global rush to embrace artificial intelligence (AI), consultancies such as BDO in Australia are warning against the trap of overstating the risks or treating them as entirely new phenomena. While AI dominates headlines in business, research, and education, the risks it poses are not new; they are extensions of risks organizations have always faced, now moving faster and reaching further.

According to the analysis, AI does not change the nature of threats so much as amplify them: its ability to process massive amounts of data in a short time, and to replicate decisions faster than any human, makes the effects of existing threats both more rapid and more widespread.

The report notes that universities in particular sit at the center of the storm, because they hold a volume and variety of sensitive data that makes them a tempting target for cyberattacks: student records, research data tied to intellectual property or defense partnerships, payroll and finance systems, and critical infrastructure.

The advent of AI hasn't changed the nature of these risks, but it has made any gap in governance or data protection exploitable more quickly and more deeply. For example, unreliable data can lead to biased models, a long-recognized issue; but when such models make rapid decisions at scale, the effects of the flaw are multiplied.
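To make that multiplication effect concrete, here is a minimal sketch (the data, feature names, and thresholds are hypothetical illustrations, not from the report) of how a bias baked into historical training labels is faithfully replicated by a model and then applied to every new case at machine speed:

```python
# Minimal sketch: a model inherits a bias from historical labels and
# then applies it to a large population in a fraction of a second.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical historical admissions data: 'group' is an irrelevant
# attribute, but past decisions quietly favoured group 0.
score = rng.normal(60, 10, n)                  # genuine merit signal
group = rng.integers(0, 2, n)                  # irrelevant attribute
past_decision = (score + 8 * (group == 0) > 65).astype(int)  # biased labels

X = np.column_stack([score, group])
model = LogisticRegression().fit(X, past_decision)

# At scale: the model applies the old bias to 100,000 new applicants
# in milliseconds, instead of over years of committee meetings.
new_score = rng.normal(60, 10, 100_000)
new_group = rng.integers(0, 2, 100_000)
pred = model.predict(np.column_stack([new_score, new_group]))

for g in (0, 1):
    rate = pred[new_group == g].mean()
    print(f"group {g}: automated acceptance rate {rate:.1%}")
```

The model never invents the bias; it inherits it from the labels and then replays it at scale, which is exactly the speed-as-multiplier effect the report describes.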

The report emphasizes that the real danger lies not in AI itself but in its speed of operation, the decisive factor in expanding the impact of traditional risks. Technical or managerial issues that once took months or years to surface can now unfold in minutes.

The absence of controls for privacy, access management, or data quality therefore becomes a multiplier of harm, not a creator of it. This acceleration makes it necessary to treat data governance not as an optional extra, but as a prerequisite for the integrity of AI-based models and systems.
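What a "prerequisite" control might look like in practice: a minimal sketch (the column names and thresholds are hypothetical) of a data quality gate that must pass before a training job is allowed to run, so a known flaw is caught before automation can multiply it:

```python
# Minimal sketch: a quality gate run before any model training job.
import pandas as pd

def quality_gate(df: pd.DataFrame, max_missing: float = 0.02,
                 max_duplicates: float = 0.01) -> list[str]:
    """Return the list of failed checks; an empty list means the gate passes."""
    failures = []
    missing = df.isna().mean().max()       # worst column's missing-value rate
    if missing > max_missing:
        failures.append(f"missing rate {missing:.1%} exceeds {max_missing:.0%}")
    dup = df.duplicated().mean()           # fraction of exact duplicate rows
    if dup > max_duplicates:
        failures.append(f"duplicate rate {dup:.1%} exceeds {max_duplicates:.0%}")
    return failures

# Hypothetical student records with an obvious quality problem.
records = pd.DataFrame({"student_id": [1, 2, 2, 4],
                        "grade": [72.0, None, 65.0, 88.0]})

problems = quality_gate(records)
if problems:
    raise SystemExit("training blocked: " + "; ".join(problems))
```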

The solution is not to create new structures or additional committees, but to integrate AI into the existing governance framework: update academic policies on privacy, acceptable use, and integrity to cover AI technologies, and align with recognized frameworks and standards such as the EU AI Act, the NIST AI RMF, and ISO/IEC 42001.
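One way to attach AI review to processes an institution already runs, rather than inventing new ones: a minimal sketch in which the policy items and class are hypothetical; only the four core functions (Govern, Map, Measure, Manage) come from the NIST AI RMF itself:

```python
# Minimal sketch: an AI review checklist keyed to the four NIST AI RMF
# core functions, tracked like any other existing compliance item.
from dataclasses import dataclass, field

@dataclass
class AIReview:
    system: str
    completed: set[str] = field(default_factory=set)

    # Hypothetical policy items mapped to the RMF's real core functions.
    REQUIRED = {
        "Govern":  "AI use covered by existing privacy and integrity policies",
        "Map":     "data sources and intended use documented",
        "Measure": "bias and accuracy tested before deployment",
        "Manage":  "incident response assigned to an existing process owner",
    }

    def outstanding(self) -> dict[str, str]:
        """Return the checklist items not yet signed off."""
        return {k: v for k, v in self.REQUIRED.items() if k not in self.completed}

review = AIReview("admissions-triage-model", completed={"Govern", "Map"})
print(review.outstanding())   # -> the Measure and Manage items remain open
```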

In scientific research, data used to train models must be handled with the same caution and scrutiny as sensitive data in medical or defense research, to ensure it is neither leaked nor misused.
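As an illustration of that caution, here is a minimal sketch (the patterns and function are hypothetical, and a real screen would be far more thorough) of rejecting records that contain obvious identifiers before they are released for model training:

```python
# Minimal sketch: screen research records for obvious identifiers
# before they leave a controlled environment for model training.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{4}\b"),
}

def screen_for_training(records: list[str]) -> list[str]:
    """Keep only records with no detected identifier; quarantine the rest."""
    clean = []
    for row in records:
        if any(p.search(row) for p in PII_PATTERNS.values()):
            continue  # quarantined rather than trained on
        clean.append(row)
    return clean

rows = ["survey response: methodology unclear",
        "contact jane.doe@uni.edu for raw data"]
print(screen_for_training(rows))   # only the first row survives
```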

The report concludes that university and data center leaders should neither get caught up in the hype surrounding AI nor overestimate or underestimate its risks. The risks are old and well known, centered on data protection, access management, and digital security culture.

What's new is the acceleration and breadth of the consequences. Experts emphasize that the real question for boardrooms is not "What new risks has AI brought?" but rather "Are our systems prepared to handle the old risks as they multiply?"

BDO concludes its analysis by noting that AI has not rewritten the rules of risk management; it has only raised the stakes. Universities that want to adopt AI with confidence first need to strengthen their foundations in data governance and cybersecurity before moving on to more complex applications. It is no longer a matter of fearing the unknown, but of preparing seriously for what is already known ... only faster.
