General Information
Two pilots are currently underway for Google Gemini and Microsoft Copilot, involving administrative operations as well as selected faculty and staff. Additional offerings will be released to students in the Fall. OpenAI and its ChatGPT product will be available soon. Information from these pilots is being used to tailor campus-wide rollouts. Available GenAI Tools can be found on UCLA’s Generative AI website.
An Artificial Intelligence (AI) prompt is a conversational way of interacting with a modern generative AI model. You type as you would speak, in natural language, sharing context, questions, facts, and other information with the AI model; in return, you receive a natural language response. Examples vary by area, but generalized tools like ChatGPT, Gemini, and Copilot can answer questions or perform tasks across a wide variety of subjects based on the background information in your prompt.
As an example, if you were a digital artist, you could type something such as:
“Generate me an image of a cat in the style of Vincent Van Gogh with at least three chairs in the background. The cat should be jumping onto one of the chairs, knocking the other two down.”
Prompts can include information about visual style, imagery, detail, etc.
A prompt for someone trying to research a term paper in world history may be something such as:
“Generate me an outline for a term paper on civil rights in the 1960s in the United States. The outline should include background on the state of politics, society and culture during those times. I need to make three coherent supporting arguments that deal with the role of law enforcement and civil rights, along with the federal government’s role leading to the Civil Rights Act of 1964, and finally how that time period is influencing modern times (post pandemic 2023 and beyond).”
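The prompt examples above all share the same ingredients: a task, background context, and explicit requirements. As an illustrative sketch only (the function and field names below are hypothetical, not part of any campus tool), that pattern can be assembled programmatically:

```python
def build_prompt(task: str, context: list[str], constraints: list[str]) -> str:
    """Assemble a structured GenAI prompt from a task, background context,
    and explicit constraints -- the same ingredients as the examples above."""
    lines = [task, ""]
    if context:
        lines.append("Background:")
        lines += [f"- {c}" for c in context]
    if constraints:
        lines.append("Requirements:")
        lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_prompt(
    task="Generate an outline for a term paper on civil rights in the 1960s in the United States.",
    context=["Cover the state of politics, society, and culture during that period."],
    constraints=[
        "Include three coherent supporting arguments on law enforcement and civil rights.",
        "Address the federal government's role leading to the Civil Rights Act of 1964.",
        "Connect the period to modern times (post-pandemic 2023 and beyond).",
    ],
)
print(prompt)
```

Whether typed by hand or assembled this way, the more specific the task, context, and constraints, the more useful the model's response tends to be.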
Currently, these tools are not present in a single repository, and are being procured on a license-by-license basis. We encourage those interested to get in touch with ITS and also browse available GenAI tools.
Microsoft's Copilot aims to base all its responses on reliable sources, but AI can make mistakes and third-party content on the internet may not always be accurate or reliable. Copilot will sometimes misrepresent the information it finds. This is why Copilot is transparent and shows the sources of information behind its answers. Always check the sources before making decisions or taking actions based on Copilot's responses. Copilot does not replace human judgment.
- Students should remember that the UCLA Student Conduct Code applies to GenAI, and states that “Unless otherwise specified by the faculty member, all submissions…must either be the Student’s own work, or must clearly acknowledge the source.” Students should consult with their instructors about the acceptable use of GenAI for each course.
This is a complex topic, involving understanding of the source data and the intellectual property rights and licenses surrounding the training of an AI model. In addition, a U.S. federal district court ruled in 2023 that purely AI-generated output does not constitute copyrightable work. In a university setting, faculty and course instructors typically own that judgment call, and departments and specific instructors may define or tailor additional considerations on top of it. At UCLA, the Academic Senate provides GenAI guidance and resources for Teaching and Learning to instructors. Additionally, students should remember that the UCLA Student Conduct Code applies to GenAI, and states that “Unless otherwise specified by the faculty member, all submissions…must either be the Student’s own work, or must clearly acknowledge the source.”
First, understand that the prompts you type into consumer AI models such as ChatGPT, Gemini, and Copilot help train the model and can be used for marketing purposes. Sensitive information, such as proprietary, business-sensitive, or employee-related information, should not be provided. See ITS’s and the University Chief Information Security Officer’s (CISO) recommendations and guiding principles for responsible use of AI. Currently, UCLA employees have access to Copilot using their UCLA Logon credentials, which provides commercial data protections for UCLA’s information (unlike consumer versions, your prompts will not be used to train models or for marketing purposes).
The University of California Office of the President’s (UCOP) AI Working Group has developed eight areas of responsible AI. These include transparency about what data was used to train the model, fairness and accuracy in the model’s predictions (and confidence in those predictions), and attention to human values, among other areas. Read more about these principles here.
Finally, be cautious of where and how you integrate AI into tasks and business processes. You shouldn’t use AI to complete your annual training as a faculty member, administrative employee, or student, nor should you be using AI to evaluate your peers, friends, or colleagues based on private information. When in doubt, if you are concerned about a human doing it, you should also be concerned about AI doing it!
Further reading:
UCLA GenAI Guidance Site
UC Responsible AI Use Recommendation Article
Responsible AI Final Report (UC)
As with other AI tools, there are limits on how many turns (prompts and responses) you can have per chat conversation and per day. If you reach those limits, you will be notified within the experience. For instance, Copilot has a limit of 30 prompts/questions within each thread.
AI models are trained on a multitude of languages. Google’s Gemini, Microsoft’s Copilot, and ChatGPT Enterprise from OpenAI are all multilingual. Many language models can translate between English and hundreds of other languages. This is based on neural machine translation work, pioneered by researchers including our own Chief Data and Artificial Intelligence Officer (CDAIO), who supervised the construction of a 600-to-1 (many languages to English) machine translation model.
Use cases include translating the text in an image from another language into your own, or taking existing food service, guest, and location/access descriptions in English and translating them into a user’s specific language of choice with high fidelity and accuracy.
The common denominator in the modern AI revolution (post-ChatGPT, November 2022) is ease of use. You type to these AI models and tools in natural language and iteratively have a conversation with them, with replies coming back in natural language. You can ask the model to soften its tone, to be inclusive in its language, to consider ethical approaches in its responses, and so on, just as you would a human.
AI tools can be costly to train, and in some cases, like LLMs, cost depends on how the tool is used: you pay based on the size of your questions and the number of iterations asked, along with the (amortized) cost of initially training the model and the cost of running it (inference) each time. ITS is working with Purchasing on methods for departments to procure Gemini and Copilot. Additionally, our next pilot for OpenAI will provide input into costing, and the tools and services departments need to run these AI capabilities.
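As a rough sketch of how usage-based LLM cost accrues, the example below estimates cost from input and output tokens. The per-1,000-token prices and the 4-characters-per-token heuristic are illustrative assumptions, not actual vendor rates:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def estimate_chat_cost(prompts, responses,
                       price_in_per_1k=0.01, price_out_per_1k=0.03):
    """Estimate usage cost: you pay per input token (your prompts) and per
    output token (the model's responses). Prices here are hypothetical."""
    tokens_in = sum(estimate_tokens(p) for p in prompts)
    tokens_out = sum(estimate_tokens(r) for r in responses)
    return (tokens_in / 1000) * price_in_per_1k + (tokens_out / 1000) * price_out_per_1k

cost = estimate_chat_cost(
    prompts=["Summarize this 2,000-word policy document."],
    responses=["Here is a summary..." * 50],  # a long model response
)
print(f"Estimated cost: ${cost:.6f}")
```

The point of the sketch is the shape of the bill: longer prompts, longer responses, and more iterations all increase per-use cost, on top of the one-time training cost amortized into pricing.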
In general, no; however, there are tasks such as visual recognition, audio speech to text, natural language processing, and other tasks for which AI is generally considered better than (most) humans. AI is dependent on the data it is fed and the prompts it is given (in the case of Large Language Models (LLMs)) and still depends on humans to help it interact with the environment. So, in general, it will not replace human decision-making any time soon.
Google does not provide commercial data security and privacy protections for Gemini.
We are reviewing several leading GenAI tools. Please visit Available GenAI Tools for up-to-date information on reviews and target availability dates.
Privacy and Security
One of the key recommendations is to realize that the public versions of AI tools provide no safeguards for data, including, most importantly, the prompts typed into these models. Prompts will be used to train future versions of the model, so consult UCOP’s data classification and sensitivity recommendations, and take care not to use these public tools with sensitive information.
Always refer to the Tools Availability Matrix for up-to-date guidance on approved data types for each GenAI tool. Note that currently, these tools have not been approved for use with P3 or P4 protected or restricted data such as financial, student education records (FERPA), patient data (HIPAA) and other protected data. Please see the UC Protection Level Classification Guide for more information about data classification levels.
Your prompts and responses are de-identified and not retained when using Microsoft's Copilot. Because of this, UCLA instructors and IT administrators cannot see them.
Microsoft's Copilot is an AI-powered web chat, meaning its responses are based only on what you input as the prompt and what the AI can pull from the web. It is not able to see or use company or device data when crafting a response. If you use Microsoft's Copilot in the Microsoft Edge sidebar, it can use the context of the webpage you have open, but you can always turn this off. It will not see or use other browser data like browser history.
GenAI tools licensed under “consumer terms” may use your data for training models and for marketing purposes. At UCLA, we review GenAI tools and only enable those with commercial data security and privacy protections, so that your data and university data stay safe and are not shared with GenAI vendors.
Data processed by public AI tools like ChatGPT, Gemini, and Copilot, rather than the institutionally purchased internal versions of these tools, should not be considered secure at all. Any prompt input into the public tools trains those tools and will become part of their public versions.
You can visit the UCLA GenAI Guidance site to review the availability of GenAI tools and approved data classification levels. Please consider reviewing UCOP’s recommended data classifications and please partner with the CDAIO and CISO offices in ITS to discuss your use cases.
Company data and intellectual property are crucial to our competitive advantage. The problem with using generative AI without commercial data protection is that your prompts, or content you generate, can be exposed to the public as part of another answer. That’s where generative AI with commercial data protection comes in: what you put in and get out is not used to train the underlying AI model, so company data stays safe.
This is something we could consider doing through the work of the CDAIO and institutional administrative operations. Our new CDAIO is helping to co-chair the California Lawyers Association AI Task Force, and contributed to the Biden Executive Order on AI at the federal level in his prior role as Chief Technology and Innovation Officer (CTIO) at NASA JPL. First, UCLA needs to focus on campuswide AI literacy initiatives and on implementing the UCOP 2021 AI Working Group recommendations, which include creating an AI Council, streamlining the AI technology procurement process, implementing AI in a responsible and ethical manner, and creating a campus-wide AI inventory.
In the Workplace
AI can coexist with humans by speeding up repetitive and monotonous tasks, helping achieve greater throughput and output through robotic process automation. It can also perform tasks such as visual recognition and object recognition more efficiently and accurately than a human can, along with speech detection, natural language understanding, and translation.
Part of implementing AI in an ethical manner includes upskilling and identifying paths for jobs that may be impacted by automation, ensuring humans have a role in overseeing the AI that takes on monotonous and repetitive tasks. This allows humans across many functions to focus on more creative, complex, and meaningful work instead.
UCLA ITS is working on a “playlist” of recommended AI coursework already available on platforms like LinkedIn Learning to help educate our workforce in AI literacy. This will include topics such as machine learning, classification, regression, and higher-level topics like neural networks and use cases. There are many classes available already, so having a playlist will be a way of navigating through the class deluge to a set of important skills for our workforce.
AI is already being integrated into tools like Zoom, Teams, and Google Meet in the form of meeting summaries, visual recognition, and speech recognition for accessibility, including closed captions, among other areas. ITS and its collaboration services, including Digital Spaces along with our Digital Foundry and CDAIO, can help navigate these tools and capabilities.
AI can be used in a number of different ways to analyze guest feedback. First, we recommend using the internal versions of tools such as Gemini and Copilot rather than the publicly available versions, since the feedback potentially contains business-sensitive information. Soon, ChatGPT Enterprise will be available for this context as well.
Once guest feedback is provided in the form of prompts with supporting context, it can be used to ideate and tailor improvements to existing services, and potentially new services, based on the feedback, knowledge of the university, and other pertinent information supplied in the prompt to guide the AI agent in its assistance.
Generally available tools such as ChatGPT, Gemini, and Copilot utilize publicly available data and do not provide university-specific responses. Although generic, the models trained on this data are extremely powerful and usable in a university setting; however, they lack the specific internal context and data necessary to fully realize their potential. Some of the tools are available in enterprise, internal form, such as Copilot, Gemini, and (soon) ChatGPT Enterprise. As opposed to the publicly available tools, these internal tools can be used to summarize and analyze UCLA documents or information, and the model will NOT be trained on them. In the coming months, ITS will partner with campus stakeholders on pilots for university-specific content.
AI models such as DALL-E 3 (available through ChatGPT Enterprise) as well as Google’s Gemini can generate imagery as output. Prompts could be tailored to create AI-assisted room tours, food photography, and other media assets to improve the visual experience for potential guests. Follow the same prompt guidance provided earlier.
AI models, including modern large language models (LLMs) like ChatGPT, Gemini, and Copilot, can be trained using new data. This can be done at the prompt engineering level or through the creation of a new foundation model trained offline and statically. If there is consideration of including data about work-related injuries or high-risk behaviors, we need to apply data classification and sensitivity levels, including work done by our Office of the Chief Information Security Officer (CISO), and understand the business sensitivity of the data being shared. This includes whether it is business proprietary or sensitive, and other classification levels such as those recommended by UCOP.
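A minimal sketch of the prompt-engineering-level approach: new data is supplied as context in the prompt itself rather than baked into the model, with a classification check gating what data may be included. The protection levels, gate, and record format below are illustrative assumptions, not official policy or any campus tool:

```python
# Hypothetical gate: assume only low-sensitivity data may be shared.
# Levels are loosely modeled on a P1-P4 classification scheme.
ALLOWED_LEVELS = {"P1", "P2"}

def ground_prompt(question: str, records: list[tuple[str, str]]) -> str:
    """Build a prompt that supplies new data as in-context information,
    skipping any record whose classification level is not allowed."""
    safe = [text for text, level in records if level in ALLOWED_LEVELS]
    context = "\n".join(f"- {t}" for t in safe)
    return f"Using only the data below, answer: {question}\n\nData:\n{context}"

prompt = ground_prompt(
    "What safety trends appear in this quarter's reports?",
    [("Slip incidents decreased 10% in Q2.", "P1"),
     ("Employee health record #4471 ...", "P4")],  # excluded by the gate
)
print(prompt)
```

The same classification check matters regardless of approach: data excluded here at the prompt level must equally be excluded from any offline training set for a new foundation model.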
It is safe to assume that AI will be a standard tool in every campus setting within the next 2-4 years. In 2023, 4 in 10 teachers surveyed were using AI in their classrooms and this is only predicted to increase.
Copilot
As a campus employee, browse to Microsoft's Copilot page and use your UCLA Logon credentials to authenticate (or your school/department-provided logon credentials for your Office 365 environment). For additional details and a step-by-step guide to using your UCLA Logon credentials, visit the log into O365 knowledge article.
Copilot is not yet available to students. Instructions on how to access it will be published as soon as the service becomes available.
You’ll know you’re using Microsoft Copilot with commercial data protection based on various visual signals:
- A “protected” image will appear next to your profile icon.
- “Your personal and company data are protected in this chat” will appear above the chat box.
Yes, go to edge://settings/sidebar and select “Copilot” under App and notification settings. From the Copilot settings page, turn off the “Allow access to any website or PDF” toggle.
Schools and departments can purchase M365 Copilot licenses by submitting a license purchase request through Software Central.
M365 Copilot is not yet available to students. Instructions on how to purchase it will be published when the service becomes available.
More Information
Email us at help@it.ucla.edu or call (310) 267-HELP (4357).
Besides the resources on this website, see also our ITS website. Please contact Chris Mattmann, CDAIO, for more information.