Week 2:
Prompt Genius

Welcome to BLAM! Week 2!

This week, we're focusing on the most important skill you can develop for working with AI: prompting. Learning to craft, refine, and rework prompts can mean the difference between frustration and success.

Wait - before we get started this week... 

Four quick things. First, would you please give us quick feedback on how BLAM! is going for you? You can do this in the window on the right. Just give us a 1-5 rating on last week's work so we can keep improving.

Second, if you're a researcher or engineer, consider joining the Lab's LLM Group - this group has regular meetings and ongoing email discussion about the development and application of LLMs for science use cases. 

Third, don't forget to add the BLAM! Calendar to your Google Calendar so you don't miss any of the events.

Finally, would you like email reminders about BLAM! during the week?   Just click here and subscribe. 

101: AI Beginner to Intermediate

Learning Resources

Google's 9 hour prompt engineering class in 20 minutes is highly recommended. Tina Huang distills the much longer Coursera course into an easily digestible 20-minute video.

Jeff Su's Master the Perfect ChatGPT Prompt Formula (in just 8 minutes)! is an excellent introduction to one particular framework.   If you're short on time, this is the one to watch. 

Google's What is Prompt Engineering document provides an introduction to different frameworks and techniques, going a bit deeper than the videos above.  

We've chosen three other readings that all have good material on how to craft effective prompts. They each cover slightly different ground, so there's no need to read all three:

Writing ChatGPT Prompts That Get Results (with Examples) - by Grammarly - This one is particularly good at laying out types of prompting. 

How to Write ChatGPT Prompts: Your 2025 Guide - by Coursera - Easy to follow, with plenty of examples.

Google's Prompting Guide 101 - this one has detailed examples of how to get the most out of prompts in various situations. No need to read the entire 68-page document, but it's definitely worth skimming for examples you might find useful.

And finally, you might enjoy Google's AI Essentials Module 3 (we recommended 1 and 2 last week) which is all about prompting. 

Try It

Try It: Log in to CBorg and try some of the prompting techniques you learned above. Do different models respond differently to different techniques?

Try It: Explore Reddit's ChatGPTPromptGenius subreddit - see how other people are creating successful prompts and try some yourself. 

New Tools

New! - Gemini for Google Workspace has been rolling out to all LBL staff over the last week. If you don't have it yet, you will soon. Read about it here. We'll have a webinar later in BLAM! devoted just to these features.

New! - We've got reasoning models! OpenAI's o1 and o3-mini are both available now in CBorg. Check out the power of reasoning models (but use them judiciously - they're expensive, so save them for the harder problems!).

Week 2 Challenge: Advanced Prompts

Apply what you learned to a real problem you face at the Lab and work on refining your prompts until you're happy with the results. Share your prompt and solution in the BLAM Week 2 101 Chat Thread (don't forget to read about how to use the BLAM Chat Rooms first). Or, if you'd like help with prompting, you can post your problem there too.

While you're there, peruse other people's submissions and upvote them by adding reactions. The BLAM! team will be awarding bucketlist awards to popular entries and our personal favorites. Oh, and please keep it work-appropriate.

Events

Click Here to Subscribe to the BLAM Calendar to see the latest information about all the events planned for the next six weeks.  

NOTE: All past webinar recordings are available here.

Tuesday February 4th at 2pm - BLAM! Seminar for Coders - You've set up Omni-Engineer, Now Let's Build. Recording here.

Wednesday February 5th at noon - Prompt Engineering Roundtable - Your Lab Colleagues Share Their Prompting Successes and Learnings. Recording here.

Cool AI/ML Thing of the Week

Starting this week, we'll feature a cool application of AI/ML each week. This week, it's tools for naturalists - birders specifically. ML techniques have made "Shazam for Birds" a reality - your smartphone can easily identify birds from their songs. The Merlin app is an amazing application of ML: you can be hiking in the redwoods, pull out your smartphone, and immediately identify which birds are around you.

You can also use these same models to continuously listen for and identify bird sounds. For example, the BirdWeather live map features listening stations around the world where you can see what birds have been "seen." Here's a station in Alameda, run by one of our IT staff, that you can check out. That station runs BirdNET-Go, a fork of BirdNET-Pi.


201: AI Intermediate to Advanced

Learning Resources

It was a big couple of weeks for reasoning models, with DeepSeek R1 debuting late in January and OpenAI's o3-mini coming out last week. 

Go into depth on how DeepSeek works with this Thoughtworks post and this Stratechery post - both are great, accessible but technical deep dives into R1.



Try It

CBorg has o1-mini and o3-mini available for testing. Try a difficult math, science, or multi-step analysis problem with them and see how they do. Oh, and watch this space for an announcement later this week about a chance to test the most advanced reasoning models on science problems with your colleagues from across the Lab and across DOE.


Week 2 Challenge - Intermediate - Coaxing Thinking

The prompt below is a puzzle that many LLMs will fail to get on the first try. Can you use your advanced prompting skills to coax out the correct answer?

Prompt: 

What do these words have in common? Freight Stone Often Canine 

For this challenge, we recommend using CBorg with either Gemini Pro 1.5 or LBL CBorg Chat - these are currently the models on CBorg that are weakest at reasoning, so they'll give you the most challenge.

The answer we're looking for (don't paste this!) is that each word contains a spelled-out number: Freight (eight), Stone (one), Often (ten), Canine (nine). Can you coax the model to this point without telling it explicitly what to look for? Does the model randomly want all its answers to be about dogs?
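If you want to sanity-check the pattern yourself before prompting, here's a minimal Python sketch - plain string matching, no LLM involved; the word list and function name are just for illustration:

NUMBER_WORDS = ["one", "two", "three", "four", "five", "six", "seven", "eight", "nine", "ten"]

def hidden_numbers(word):
    # Return any spelled-out numbers hiding inside the word.
    return [n for n in NUMBER_WORDS if n in word.lower()]

for w in ["Freight", "Stone", "Often", "Canine"]:
    print(w, "->", hidden_numbers(w))
# Freight -> ['eight'], Stone -> ['one'], Often -> ['ten'], Canine -> ['nine']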

Don't forget to link to your successes and failures in the Week 2 201 Chat. See the image at left, which shows how to get a shareable link to your chat session (click the three dots next to your chat name, select Share, copy the link, and paste it into BLAM! Chat).

So why do the models have trouble with this one? Two reasons. First, last week we talked about the "how many Rs in strawberry" problem. LLMs don't see words as individual letters; they see them as chunks (tokens), so asking them to look inside words is somewhat unreliable - especially for earlier models. Second, this is a very open-ended reasoning task that isn't part of these models' training, which makes this kind of problem challenging.
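To make the "chunks" idea concrete, here's a small sketch using tiktoken, OpenAI's open-source tokenizer library (pip install tiktoken). The encoding name below is just one common choice, and the exact splits vary by model, so treat the output as illustrative rather than exactly what any CBorg model sees:

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # one common OpenAI encoding; tokenizers differ by model
for word in ["strawberry", "Freight", "Canine"]:
    pieces = [enc.decode([t]) for t in enc.encode(word)]
    print(word, "->", pieces)
# The model works with whole pieces like these, not individual letters,
# which is why "look inside the word" questions can trip it up.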

Thanks to Reddit user u/AlwaysPrivate123 for the inspiration, and to the NYT, which originally published this as a Connections puzzle clue.



CODERS

Learning Resources

Ready to start integrating GenAI into your workflows? Google's Cloud Skills Boost site gives you several course options, including "Integrate Generative AI Into Your Data Workflow", which starts with "Gemini for Data Scientists and Analysts".

Try It

OpenAI's o1 and o3-mini models are ranked among the highest for code development. Try giving them a difficult coding problem and see how they do. Try it on CBorg.

Week 2 Challenge - Coders

You've set up Omni-Engineer (last week's challenge); now let's build with it. During this Tuesday's 2pm webinar, you'll see how to use Omni-Engineer to make a simple demo, then try it yourself. Need last week's webinar materials? They're here.

Go Deeper

AI coding assistants can be pretty magical, but there are downsides too. Like humans, these tools don't produce perfect code - and by enabling less experienced coders to generate more advanced code, they may introduce new vulnerabilities and issues. Reading: AI may create a tidal wave of buggy, vulnerable software.