Lesson 4 - Intro to Prompt Engineering #27
Conversation
Converting to draft so I can make some changes.
By the end of this lesson you will be able to:
- Describe Prompt Engineering - what it is, and why it matters to generative AI apps.
- Discuss Real-World Prompt Examples - illustrate their value and highlight their limitations.
- Apply Prompt-Engineering Techniques - to iterate & validate responses till desired criteria is met.
"desired criteria is met." to "the desired criteria are met"
- Discuss Real-World Prompt Examples - illustrate their value and highlight their limitations.
Instead of "Real-World Prompt Examples", maybe add "for building our Education Startup product".
Define it and explain why it is needed.
-->
In this lesson unit, we'll focus on answering two questions:
I would suggest adding a 3rd question: "What makes a prompt a good prompt?" You sort of answer this in the Prompt Mindset.
This will set the stage for you to explore more _advanced engineering techniques_ in the next lesson. It should also help you **apply these learnings** to your real-world application by answering this question:
> _How can better prompt engineering help me deliver an enhanced experience to students, educators, adminstrators and other user audiences, in my education startup_. |
"adminstrators" to "administrators"
Just like with rubrics, prompts can benefit from an _iterate and validate_ process where we design the prompt, then see how well those instructions were understood by analyzing the responses, then refine the prompt and try again. Iterate till results are _closer_ to our expectations.
But why do we need an entire **prompt engineering discipline** with tools, techniques and best practices, for use in generative AI applications? Shouldn't our intuition be enough? It's because LLMs are great at _generating content_ but are clueless about the _meaning_ of the content they just created. So they can't tell if the output was relevantand met your expectations for quality.
"relevantand met" to "relevant and met"
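Side note, not part of the lesson text: the _iterate and validate_ loop described above could be sketched roughly like this. All names here are hypothetical, and the `generate` function is a stub standing in for a real model call (e.g., an OpenAI chat request):

```python
# Sketch of an iterate-and-validate prompt loop (hypothetical helper names).

def generate(prompt: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    return f"Response to: {prompt}"

def meets_criteria(response: str, max_words: int = 50) -> bool:
    # Example validation rule: the response must fit a word budget.
    return len(response.split()) <= max_words

def iterate_prompt(prompt: str, max_rounds: int = 3) -> str:
    response = ""
    for _ in range(max_rounds):
        response = generate(prompt)
        if meets_criteria(response):
            return response
        # Refine the prompt based on what went wrong, then try again.
        prompt += " Please answer in 50 words or fewer."
    return response
```

The validation rule here is deliberately simple; in practice it could be any check on relevance or quality that your app cares about.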
Here are some of the challenges that prompt engineering is trying to address better:
1. **Model responses are stochastic.** The same model may produce different results for the same prompt input. This can lead to inconsistent user experiences in your generative AI apps, and have an impact on follow-up actions or workflows driven from it.
The numbering seems to be off here for this list. It is all 1's.
1. **Models can hallucinate responses.** The model can return responses that are incorrect, imaginary, or contradictory to known facts. Because LLMs use _pre-trained models_ (based on massive but finite training data) they can lack knowledge about concepts outside that trained scope. And since they don't provide citations for their responses, we have no way of knowing if outputs were valid or not.
" if outputs" to " if the outputs"
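Reviewer aside: the stochasticity point in item 1 can be illustrated with a toy sampler. The probability weights below are made up and just stand in for a model's next-token distribution; different seeds stand in for repeated calls to the same model:

```python
import random

# Toy illustration of stochastic model output: repeated "calls" with the
# same input distribution can return different results.

def sample_token(weights: dict, seed=None) -> str:
    # Pick one token according to its probability weight.
    rng = random.Random(seed)
    tokens = list(weights)
    return rng.choices(tokens, weights=[weights[t] for t in tokens], k=1)[0]

next_token_probs = {"Paris": 0.7, "London": 0.2, "Berlin": 0.1}

# Simulate 20 independent calls; we typically see more than one distinct token.
outputs = {sample_token(next_token_probs, seed=s) for s in range(20)}
print(outputs)
```

This is why two users asking the identical question can get different answers, and why validation has to happen on each response rather than once per prompt.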
Not a major deal, but since we are using Azure OpenAI throughout the rest of the course, could you use the Azure OpenAI playground and then reference that it will work the same in the OpenAI playground as well?
 | ||
Let's ask it to limit the length to 3 sentencees. _This is much better aligned to what I wanted!_
"sentencees" to "sentences"
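While we're on the 3-sentence constraint: a quick, hypothetical helper for validating it programmatically. The naive splitter below is just an illustration, not part of the lesson:

```python
import re

def sentence_count(text: str) -> int:
    # Naive sentence counter: splits on '.', '!', '?' terminators.
    return len([s for s in re.split(r"[.!?]+", text) if s.strip()])

response = "Prompt engineering matters. It guides the model. Results improve."
print(sentence_count(response))  # 3
```

A check like `sentence_count(response) <= 3` is one way to turn "limit the length to 3 sentences" into an automated validation step in the iterate-and-validate loop.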
Merging Origin/Main (Codespaces Updates)
Hey @softchris - started the PR for Lesson 4. Not sure how to tag Maxim as a reviewer here, but thought you could review and make sure this will align with the follow-up advanced lesson.
@koreyspace hope to check in with you for the notebooks part.