Another major university is supporting generative AI use but with serious guardrails

The University of Hong Kong will provide its teachers and students free access to various generative AI tools, along with some rules to follow.
Written by Eileen Yu, Senior Contributing Editor

While some schools have curbed the use of generative artificial intelligence (AI), the University of Hong Kong (HKU) is going all in and urging both its teachers and students to embrace the technology. 

The university is backing this push by giving teachers and students free access to various generative AI tools, including Microsoft Azure OpenAI and OpenAI's ChatGPT and DALL-E.


HKU said it had already provided free access to ChatGPT in recent months, having introduced a policy on the use of generative AI in June. It will now expand the range of options to include other tools starting in the new semester, which begins in September.

HKU advocates five key areas of literacy: oral, written, visual, and digital communication, and, most recently, generative AI.

Alongside free access for teaching and learning purposes, HKU will also provide other resources, including training and online courses, to guide the effective and responsible use of AI tools.

Under its generative AI policy, the university's teachers are urged to leverage generative AI to optimize student learning, such as fostering analytical thinking, and producing "creative and engaging" activities as well as content customized for individual students.


Teachers can also use generative AI in assessing students' work, establishing mechanisms to facilitate evaluation "authentically and fairly." "The aim is to ensure the responsible and effective use of generative AI tools and uphold the highest standard of academic integrity," HKU said.

To mitigate risks from applying the technology for work assessment, it noted that teachers must clearly outline expectations and provide guidance on how the use of generative AI tools in coursework and assignments should be properly declared and cited.

Students will also be encouraged to adopt such tools in their work through alternative assessment methods, such as device-free examinations and student peer assessments.

The university said it would carry out periodic evaluations involving teachers, students, and IT administrators to ensure generative AI is effectively integrated and address any new challenges.


"HKU embraces generative AI and recognizes AI literacy as essential to teaching and learning," said Ian Holliday, who led the taskforce that formulated the generative AI policy paper. "Our goal is to enable our teachers and students to become not only AI-literate, but also leaders in exploiting the vast potential of generative AI for the benefit of mankind."

Holliday now chairs the university's advisory committee to oversee the integration of generative AI in teaching and learning activities. 

HKU said it was directing funds totaling HK$15.7 million ($2.01 million) from the University Grants Committee's Fund for Innovative Technology-in-Education toward its generative AI initiatives. It also is looking to collaborate with universities from other regions and markets to further explore the potential of generative AI. 

In January this year, New York City Schools blocked student and teacher access to ChatGPT on its devices and networks over "negative impacts" on learning as well as concerns about the accuracy of content.


Singapore in February said it supported the use of generative AI in schools, but urged students to avoid over-reliance on such tools and to understand their limits.

High trust in generative AI, but more work needed

In a global study released this week, 66% of executives expressed concerns over the potential for bias and disinformation from generative AI. Almost 80% of respondents, though, had a high or significant level of trust that generative AI could be tapped for their organization's future products and operations, according to the IDC survey, which was commissioned by Teradata and polled 900 senior executives in Asia, Europe, and the US.

Some 86% said more governance was necessary to ensure the quality and integrity of generative AI insights. And while 56% said they were under high or significant pressure to adopt generative AI within their organization in the next six months to a year, just 42% said they had the skills in place to implement such technologies in that period.

Another 30% said they were ready to leverage generative AI today, the study found.


A Capgemini Research Institute report last month noted that consumers were enthusiastic about the use of generative AI in their daily activities. Just over half, at 51%, said they had explored such tools, with 53% using generative AI to help with their financial planning, noted Capgemini, which based its findings on a survey of 10,000 consumers across 13 markets, including Singapore, Australia, Japan, Germany, France, Sweden, the US, and the UK.

Some 67% of respondents believed they could benefit from using generative AI for medical diagnoses and advice, including 63% who liked the idea of using the technology for more accurate and efficient drug discovery. 

The report, though, noted an apparently low awareness of the potential risks. Just 27% were concerned about the use of generative AI, and 49% were unfazed by its use in creating fake news. Some 34% were worried about its use in phishing attacks, while 33% were concerned about copyright issues.
