Week 5:
Safe and Secure
Welcome to BLAM! Week 5!
Now that you’ve learned about all the cool things you can do with AI in the workplace, it’s time to add the essential layer of compliance. This week we’ll learn about ethical considerations, biases, and responsible use of AI. Humans understand context far better than AI does, which is why all AI-generated content must be reviewed and vetted by a person. By the end of this week, you’ll be equipped to use AI responsibly and create a team AI ethics guide.
Have you claimed your AI Badge yet?
If not, head over to Bucketlist, select the Achievements tab, and claim your BLAM! AI Adventurer Achievement today.
Go on - you deserve it!
101: AI Beginner to Intermediate
Learning Resources
Reading
Berkeley Lab policies to bookmark:
The Generative AI Tools in Research guidance from Berkeley Lab’s Research Office of Compliance, which includes guidelines for processing research data, reporting research outputs, and peer review.
The IT Division’s Guidance on using Generative AI tools, addressing key AI issues that often come up, such as the security of sensitive information and how to avoid infringing on external parties’ copyrights and other intellectual property.
Check out the IT Policy “Cheat Sheet” with details on acceptable uses on a model-by-model basis: AI Tools Security Levels
Bonus resource! UC Berkeley has also developed policies, advisories, and guidelines for the use of AI.
Harvard Business Review, How to Implement AI - Responsibly
IBM, What Are AI Hallucinations? discusses not only what AI hallucinations are but also their implications and ways to prevent them.
Videos
Google Essentials Practice Using AI Responsibly, a great backgrounder on the importance of AI ethics and a good reminder that AI is a tool, but not a perfect one.
IBM Technology’s Why Large Language Models Hallucinate is a short video that breaks down what a hallucination is, why it happens, and how to minimize them in your prompts.
Try It
Check out Perplexity Pro - You can get one year for free when you sign up with a .gov email account. This is not a negotiated contract, so please don't share anything confidential or proprietary here. They also just launched Perplexity DeepResearch, which you can read about on their blog.
New Tools
While many new AI detection tools are becoming available, they still have a ways to go. A nifty one you should check out is Grammarly’s new AI detection tool.
Week 5 Challenge: Break it
Get ChatGPT to confidently generate a false or misleading response—a classic AI hallucination!
To do this follow these steps:
Pick a tricky topic – Choose something niche, obscure, or ambiguous.
Craft a sneaky prompt – Try to phrase your question in a way that could confuse or mislead ChatGPT. (Example: “What are the names of the moons discovered orbiting Earth in 2023?”)
Screenshot your best hallucination – an answer that sounds real but is totally false – and share it on the BLAM Week 5 Chat thread.
Things that can help elicit a hallucination: ask about fictional events as if they’re real; mix real and fake details to make them harder for ChatGPT to verify; request obscure scientific discoveries or historical facts that don’t exist; or see if you can get it to invent fake citations or sources (but be ethical – no spreading misinformation!).
Share how it did - the good and the bad - in the BLAM Week 5 101 Chat Thread (Don't forget to read about how to use the BLAM Chat Rooms first).
While you're there, peruse other people's submissions and upvote them by adding reactions.
Events
NOTE: All past webinar recordings are available here.
Tuesday February 18th 2-3pm - Presented by Saroj Adhikari along with our coder guru, Tim Fong, this week's coder webinar will introduce Scalene, a Python profiler that can generate AI-powered optimization proposals.
Wednesday February 26th 11AM to Noon - Coders Office Hours - Stop by and get answers to your questions about developing with LLMs.
Wednesday February 26th Noon - 1PM - Webinar featuring Berkeley Lab experts in policy, research, and legal affairs discussing how to use AI responsibly.
Go Deeper
Are your queries destroying the planet? It's complicated:
Andreessen Horowitz, Welcome to LLMflation - LLM Inference Cost is Going Down Fast
Hard Fork - AI's Environmental Impact - starts at ~31:00
201: AI Intermediate to Advanced
Learning Resources
Review Berkeley Lab policies in the 101 Learning Resources section.
Google Research Blog, Accelerating Scientific Breakthroughs with an AI Co-Scientist.
Try It
Check out Google AI Studio, Google's browser-based platform for experimenting with generative AI models. It's used to prototype and refine AI-powered solutions.
Week 5 Challenge - Intermediate - Scripting
Use Google AI Studio to create a custom prompt with a system message that detects whether the input text is AI-generated and outputs a likelihood of “high”, “medium”, or “low”. Compare your results to other AI detectors online, such as Grammarly's.
CODERS
Learning Resources
Try It
DLAB's Python GPT Fundamentals class gets you hands-on with GPT in Python. Register now for next week's course.
Week 5 Challenge - Coders
Building on this week's webinar introduction to Scalene, a Python profiler that can generate AI-powered optimization proposals, your challenge is to run the profiler on your own code that may benefit from optimization.
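If you don't have a candidate script handy, here is a small, self-contained example (hypothetical, just for practice) with an obvious hotspot. After installing Scalene (`pip install scalene`), save it as, say, `hotspots.py` and run `scalene hotspots.py`; a line-level profiler should attribute most of the runtime to the pure-Python loops:

```python
# hotspots.py - a small practice script with a deliberate hotspot.
# All names here are illustrative, not from the webinar.

def slow_sum_of_squares(n):
    # Builds an intermediate list and loops twice in pure Python;
    # this is the code a profiler should flag for optimization.
    squares = []
    for i in range(n):
        squares.append(i * i)
    total = 0
    for s in squares:
        total += s
    return total

def fast_sum_of_squares(n):
    # Closed form: 0^2 + 1^2 + ... + (n-1)^2 = (n-1)*n*(2n-1)/6
    return (n - 1) * n * (2 * n - 1) // 6

if __name__ == "__main__":
    n = 2_000_000  # large enough for the profiler to collect samples
    assert slow_sum_of_squares(n) == fast_sum_of_squares(n)
```

The closed-form version gives you a ready-made "optimized" answer to compare against whatever proposal the profiler's AI feature suggests; consult the Scalene documentation for how to enable that feature.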
Remember to get access to the essential Coder webinar and office hours by subscribing to the BLAM calendar.