
Commit

📏
transitive-bullshit committed Dec 21, 2023
1 parent 025c5fe commit 7eb670d
Showing 3 changed files with 206 additions and 2 deletions.
@@ -0,0 +1,106 @@
# Daily Digest: Are you prepared? For what's next from OpenAI?

### PLUS: Bard adds more features

[Sign up](https://www.bensbites.co/?utm_source=bensbites\&utm_medium=referral\&utm_campaign=daily-digest-are-you-prepared-for-what-s-next-from-openai)|[Advertise](https://sponsor.bensbites.co/?utm_source=bensbites\&utm_medium=referral\&utm_campaign=daily-digest-are-you-prepared-for-what-s-next-from-openai)|[Ben’s Bites News](https://news.bensbites.co/?utm_source=bensbites\&utm_medium=referral\&utm_campaign=daily-digest-are-you-prepared-for-what-s-next-from-openai)\
Daily Digest #309

Hello folks, here’s what we have today:

###### **PICKS**

1. [Preparedness framework from OpenAI](https://openai.com/safety/preparedness?utm_source=bensbites\&utm_medium=referral\&utm_campaign=daily-digest-are-you-prepared-for-what-s-next-from-openai) - OpenAI just put together another **safety squad called the Preparedness Team.** They also cooked up something called the Preparedness Framework to go with it. Basically, they wanna make sure their next-level AI models are chill to release into the world, and not gonna cause any trouble. 🍿 [Our Summary](https://bensbites.beehiiv.com/p/open-ais-preparedness-framework) (also below)

2. **3 mini updates for Bard:**

1. YouTube, Gmail, Maps, etc. aka Bard Extensions are now available in Japanese and Korean as well.

   2. Export to Replit now works for 18+ programming languages, including C++, JavaScript, Ruby and Swift.

3. Bard UK has Gemini Pro running behind it. Finally!

from our sponsor

With Botsonic’s GPT Bot Builder, make a personalized assistant for unique tasks. Be it mastering yoga, helping kids with homework or planning recipes, it’s all a click away.

Customize with your data and branding! Embed it on your website, WhatsApp or Slack.

Like OpenAI’s GPT Builder but with superpowers!

Ben’s Bites readers get a 20% discount with code **BENSBITES20**. [Get started today!](https://writesonic.com/gpt-builder?utm_source=bens-bites\&utm_medium=newsletter\&utm_campaign=botsonic-GPT-Botbuilder) 💥

###### **TOP TOOLS**

- [VoiceDual](https://www.voicedual.com/?utm_source=bensbites\&utm_medium=referral\&utm_campaign=daily-digest-are-you-prepared-for-what-s-next-from-openai) - **Transform your voice** with AI.

- [Vexa Search](https://vexasearch.com/?utm_source=bensbites\&utm_medium=referral\&utm_campaign=daily-digest-are-you-prepared-for-what-s-next-from-openai) - Explore the depths of **knowledge through images**.

- [GuestLab AI](https://guestlab.ai/?utm_source=bensbites\&utm_medium=referral\&utm_campaign=daily-digest-are-you-prepared-for-what-s-next-from-openai) - Hours of **guest research** delivered in seconds.

- [Martian](https://withmartian.com/?utm_source=bensbites\&utm_medium=referral\&utm_campaign=daily-digest-are-you-prepared-for-what-s-next-from-openai) - **Route dynamically** between multiple models, reduce costs by 20%-97%.

[View more →](https://news.bensbites.co/tags/show?utm_source=bensbites\&utm_medium=referral\&utm_campaign=daily-digest-are-you-prepared-for-what-s-next-from-openai)
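The dynamic-routing idea behind tools like Martian can be sketched in a few lines: estimate how hard a prompt is, then pick the cheapest model likely to handle it. This is a hypothetical illustration of the general technique, not Martian's actual API; the model names, costs, and heuristic are all made up.

```python
# Minimal sketch of dynamic LLM routing: pick the cheapest model that is
# likely good enough for a given prompt. Model names and per-token costs
# are illustrative placeholders, not a real catalog.

MODELS = [
    {"name": "small-7b", "cost_per_1k_tokens": 0.0002, "max_difficulty": 1},
    {"name": "medium-34b", "cost_per_1k_tokens": 0.002, "max_difficulty": 2},
    {"name": "large-frontier", "cost_per_1k_tokens": 0.03, "max_difficulty": 3},
]

def estimate_difficulty(prompt: str) -> int:
    """Crude heuristic: longer prompts and reasoning keywords score harder."""
    score = 1
    if len(prompt) > 500:
        score += 1
    if any(k in prompt.lower() for k in ("prove", "step by step", "debug")):
        score += 1
    return min(score, 3)

def route(prompt: str) -> str:
    """Return the name of the cheapest model rated for this difficulty."""
    difficulty = estimate_difficulty(prompt)
    # MODELS is ordered cheapest-first, so the first match minimizes cost.
    for model in MODELS:
        if model["max_difficulty"] >= difficulty:
            return model["name"]
    return MODELS[-1]["name"]

print(route("What is the capital of France?"))  # small-7b
print(route("Prove that the algorithm terminates, step by step."))  # medium-34b
```

A production router would score prompts with a learned classifier rather than keywords, but the cost-ordered fallthrough is the core of the technique.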

###### **NEWS**

**AI governance:**

- OpenAI says the **[board can overrule CEO](https://www.bloomberg.com/news/articles/2023-12-18/openai-says-board-can-overrule-ceo-on-safety-of-new-ai-releases?utm_source=bensbites\&utm_medium=referral\&utm_campaign=daily-digest-are-you-prepared-for-what-s-next-from-openai)** on safety of new AI releases.

- OpenAI overhauls **[content moderation efforts](https://www.theinformation.com/articles/openai-overhauls-content-moderation-efforts-as-elections-loom?rc=bdorru\&utm_source=bensbites\&utm_medium=referral\&utm_campaign=daily-digest-are-you-prepared-for-what-s-next-from-openai)** as elections loom.

**Model-mania:**

- Many options for **[running Mistral models](https://simonwillison.net/2023/Dec/18/mistral/?utm_source=bensbites\&utm_medium=referral\&utm_campaign=daily-digest-are-you-prepared-for-what-s-next-from-openai)** in your terminal using LLM.

- **[OpenChat-3.5-1210](https://twitter.com/openchatdev/status/1736840031266918616?utm_source=bensbites\&utm_medium=referral\&utm_campaign=daily-digest-are-you-prepared-for-what-s-next-from-openai)**, a new 7B open-source model surpassing GPT-3.5 and Grok models.

- How we optimized **[Mistral 7B for fine-tuning](https://openpipe.ai/blog/mistral-7b-fine-tune-optimized?utm_source=bensbites\&utm_medium=referral\&utm_campaign=daily-digest-are-you-prepared-for-what-s-next-from-openai)**.

**Money trails:**

- **[Martian raised $9M](https://twitter.com/withmartian/status/1736845543266263077?utm_source=bensbites\&utm_medium=referral\&utm_campaign=daily-digest-are-you-prepared-for-what-s-next-from-openai)** for dynamic LLM routing.

- **[IBM spends $2.3B](https://techcrunch.com/2023/12/18/ibm-to-acquire-streamsets-and-webmethods-from-software-ag/?utm_source=bensbites\&utm_medium=referral\&utm_campaign=daily-digest-are-you-prepared-for-what-s-next-from-openai)** to acquire StreamSets and WebMethods from Software AG.

[View more →](https://news.bensbites.co/tags/news/trending?utm_source=bensbites\&utm_medium=referral\&utm_campaign=daily-digest-are-you-prepared-for-what-s-next-from-openai)

###### **QUICK BITES**

OpenAI just put together another safety squad called the [Preparedness Team.](https://openai.com/safety/preparedness?utm_source=bensbites\&utm_medium=referral\&utm_campaign=daily-digest-are-you-prepared-for-what-s-next-from-openai) They also cooked up something called the Preparedness Framework to go with it.

Basically, they wanna make sure their next-level AI models are chill to release into the world, and not gonna cause any trouble.

**What is going on here?**

OpenAI is setting up another framework to evaluate whether humans and models are prepared to face each other.

![](https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/4c7105c6-146f-41c0-a607-bce551e47fcb/image.png?t=1702987769)

**What does this mean?**

OpenAI has several safety teams. The Superalignment team focuses on existential risks and talks about artificial superintelligence that will surpass humans. At the same time, model safety teams make sure models like GPT-3.5 and GPT-4 are safe for everyday use.

This new Preparedness team will focus on the soon-to-come risks of the most advanced AI models, aka frontier models. Its work will be grounded in fact, with a builder mindset.

![](https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/81da11f9-6e70-4aaf-90bb-c6e56944426d/image.png?t=1702987463)

The framework breaks risk down into different areas like hacking risks, how persuasive the models could be to humans, how independently they can act, and more. They'll give each model a safety rating - **low, medium, high or critical** risk. Only the low and medium-risk ones get the green light to launch. High-risk models can be developed further but not deployed. You can get more details about the [framework (beta) here.](https://cdn.openai.com/openai-preparedness-framework-beta.pdf?utm_source=bensbites\&utm_medium=referral\&utm_campaign=daily-digest-are-you-prepared-for-what-s-next-from-openai)
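The rating-to-decision rule described above can be sketched as a tiny lookup. This is a paraphrase of the framework's public description (deploy only at medium or below, keep developing only at high or below), not OpenAI's actual code; the function names are ours.

```python
# Sketch of the Preparedness Framework's gating rule as publicly described:
# a model may launch only if its risk rating is "medium" or lower, and may
# keep being developed only if it is "high" or lower. Illustration only.

RISK_LEVELS = ["low", "medium", "high", "critical"]  # ordered least to most risky

def can_deploy(risk: str) -> bool:
    """Launch is allowed only for low- and medium-risk models."""
    return RISK_LEVELS.index(risk) <= RISK_LEVELS.index("medium")

def can_develop_further(risk: str) -> bool:
    """Continued development is allowed up to (and including) high risk."""
    return RISK_LEVELS.index(risk) <= RISK_LEVELS.index("high")

for level in RISK_LEVELS:
    print(f"{level:>8}: deploy={can_deploy(level)}, develop={can_develop_further(level)}")
```

So a "high" model is the interesting middle case: it can stay in the lab for more work, but never ships until mitigations bring it down to medium.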

The Preparedness team will do the technical work of evaluating the models, and then, with external input from safety advisors, OpenAI leadership will make the final decisions. The Board of Directors (yes, the infamous board) has the power to reverse those decisions if it feels the models are not safe enough.

**Why should I care?**

Recently, DeepMind’s LLM solved a previously unsolved maths problem. Another research paper showed that visual models can now solve captchas better than humans. So, if AI models can create new toxins, find new loopholes in security systems, or operate your computer on their own, the risk they pose increases. These risks are more plausible than the hypothetical scenarios of AI bots killing humans. So having a framework for knowing these models’ limits is crucial for being proactive while developing them, instead of patching things up afterwards.

One speculative take from all the safety stuff OpenAI has been releasing in the last few days is that OpenAI has a new model with higher intelligence (and risks), and the team is preparing the ground for its release. Again, just a rumor/speculation, whatever you want to call it.

[*Share this story*](https://bensbites.beehiiv.com/p/open-ais-preparedness-framework)

### Ben’s Bites Insights

We have 2 databases that are updated daily, which you can access by sharing Ben’s Bites using the link below:

- **All 10k+ links** we’ve covered, easily filterable (1 referral)

- **6k+ AI company funding rounds** from Jan 2022, including investors, amounts, stage, etc. (3 referrals)
73 changes: 71 additions & 2 deletions fixtures/bensbites.beehiiv.com/newsletter.json
Expand Up @@ -75,9 +75,10 @@
"email_sender_name": "Ben's Bites",
"render_authors_widget": false,
"has_referral_program": true,
"has_recommendations": true,
"has_recommendations": false,
"beehiiv_branding": true,
"has_polls": true,
"stripe_payment_method_domain_enabled": false,
"has_pages": true,
"language": "en",
"configured_domain": "bensbites.beehiiv.com",
Expand Down Expand Up @@ -128,7 +129,7 @@
"title": "Recommendations Page",
"href": "/recommendations",
"managed_type": "recommendations",
"enabled": true,
"enabled": false,
"full_url": "https://bensbites.beehiiv.com/recommendations",
"action_text": "Remove",
"modal_header": "Remove any recommendations or boosts first!",
Expand Down Expand Up @@ -273,6 +274,74 @@
]
},
"posts": [
{
"id": "5b1195fa-737b-4edc-b825-0023f31abeff",
"publication_id": "447f6e60-e36a-4642-b6f8-46beb19045ec",
"web_title": "Daily Digest: Are you prepared? For what's next from OpenAI?",
"web_subtitle": "PLUS: Bard adds more features",
"status": "published",
"override_scheduled_at": "2023-12-19T14:00:00.000Z",
"slug": "daily-digest-prepared-whats-next-openai",
"image_url": "https://beehiiv-images-production.s3.amazonaws.com/uploads/asset/file/64eac329-f52c-4e82-8363-173ab2f415f9/Blue_Level_3.png?t=1702985035",
"meta_default_title": "Daily Digest: Are you prepared? For what's next from OpenAI?",
"meta_default_description": "PLUS: Bard adds more features",
"meta_og_title": "Daily Digest: Are you prepared? For what's next from OpenAI?",
"meta_og_description": "PLUS: Bard adds more features",
"meta_twitter_title": "Daily Digest: Are you prepared? For what's next from OpenAI?",
"meta_twitter_description": "PLUS: Bard adds more features",
"audience": "free",
"comments_enabled": true,
"comments_state": "default",
"enforce_gated_content": false,
"enable_popup_on_scroll": true,
"email_capture_title": "Join 100,000+ others",
"email_capture_message": "Stay informed and up to date on AI",
"email_capture_cta": "Subscribe",
"authors": [],
"content_tags": [
{
"id": "64ea972c-91c5-406a-a512-1d6152696293",
"display": "📬 Daily Digest"
}
],
"created_at": "2023-12-19T05:21:23Z",
"updated_at": "2023-12-19T14:00:05Z",
"url": "https://bensbites.beehiiv.com/p/daily-digest-prepared-whats-next-openai"
},
{
"id": "e717786d-fa69-4b3d-9193-4e8157f67984",
"publication_id": "447f6e60-e36a-4642-b6f8-46beb19045ec",
"web_title": "What is Open AI's Preparedness Framework",
"web_subtitle": null,
"status": "published",
"override_scheduled_at": "2023-12-19T12:06:18.385Z",
"slug": "open-ais-preparedness-framework",
"image_url": "https://beehiiv-images-production.s3.amazonaws.com/uploads/asset/file/b184070c-0faa-4d88-a21b-c548c2659d70/image.png?t=1702986409",
"meta_default_title": "What is Open AI's Preparedness Framework",
"meta_default_description": "OpenAI has formed another team for AI safety and this one’s called Preparedness Team. There’s a Preparedness Framework that goes with it.",
"meta_og_title": "What is Open AI's Preparedness Framework",
"meta_og_description": "OpenAI has formed another team for AI safety and this one’s called Preparedness Team. There’s a Preparedness Framework that goes with it.",
"meta_twitter_title": "What is Open AI's Preparedness Framework",
"meta_twitter_description": "OpenAI has formed another team for AI safety and this one’s called Preparedness Team. There’s a Preparedness Framework that goes with it.",
"audience": "free",
"comments_enabled": true,
"comments_state": "default",
"enforce_gated_content": false,
"enable_popup_on_scroll": true,
"email_capture_title": "Join 100,000+ others",
"email_capture_message": "Stay informed and up to date on AI",
"email_capture_cta": "Subscribe",
"authors": [],
"content_tags": [
{
"id": "4b4f44ed-2510-4e0e-b4d5-74f57e40d0f1",
"display": "🍿 Quick Bites"
}
],
"created_at": "2023-12-19T11:22:13Z",
"updated_at": "2023-12-19T12:09:07Z",
"url": "https://bensbites.beehiiv.com/p/open-ais-preparedness-framework"
},
{
"id": "f45671b2-3746-4297-b9e7-9fb773286b4c",
"publication_id": "447f6e60-e36a-4642-b6f8-46beb19045ec",
Expand Down
29 changes: 29 additions & 0 deletions fixtures/bensbites.beehiiv.com/open-ais-preparedness-framework.md
@@ -0,0 +1,29 @@
# What is Open AI's Preparedness Framework

OpenAI just put together another safety squad called the [Preparedness Team.](https://openai.com/safety/preparedness?utm_source=bensbites\&utm_medium=referral\&utm_campaign=what-is-open-ai-s-preparedness-framework) They also cooked up something called the Preparedness Framework to go with it.

Basically, they wanna make sure their next-level AI models are chill to release into the world, and not gonna cause any trouble.

## What’s going on here?

OpenAI is setting up another framework to evaluate whether humans and models are prepared to face each other.

![](https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/b184070c-0faa-4d88-a21b-c548c2659d70/image.png?t=1702986409)

## What does that mean?

OpenAI has several safety teams. The Superalignment team focuses on existential risks and talks about artificial superintelligence that will surpass humans. At the same time, model safety teams make sure models like GPT-3.5 and GPT-4 are safe for everyday use.

This new Preparedness team will focus on the soon-to-come risks of the most advanced AI models, aka frontier models. Its work will be grounded in fact, with a builder mindset.

![](https://media.beehiiv.com/cdn-cgi/image/fit=scale-down,format=auto,onerror=redirect,quality=80/uploads/asset/file/81da11f9-6e70-4aaf-90bb-c6e56944426d/image.png?t=1702987463)

The framework breaks risk down into different areas like hacking risks, how persuasive the models could be to humans, how independently they can act, and more. They'll give each model a safety rating - **low, medium, high or critical** risk. Only the low and medium-risk ones get the green light to launch. High-risk models can be developed further but not deployed. You can get more details about the [framework (beta) here.](https://cdn.openai.com/openai-preparedness-framework-beta.pdf?utm_source=bensbites\&utm_medium=referral\&utm_campaign=what-is-open-ai-s-preparedness-framework)

The Preparedness team will do the technical work of evaluating the models, and then, with external input from safety advisors, OpenAI leadership will make the final decisions. The Board of Directors (yes, the infamous board) has the power to reverse those decisions if it feels the models are not safe enough.

## Why should I care?

Recently, DeepMind’s LLM solved a previously unsolved maths problem. Another research paper showed that visual models can now solve captchas better than humans. So, if AI models can create new toxins, find new loopholes in security systems, or operate your computer on their own, the risk they pose increases. These risks are more plausible than the hypothetical scenarios of AI bots killing humans. So having a framework for knowing these models’ limits is crucial for being proactive while developing them, instead of patching things up afterwards.

One speculative take from all the safety stuff OpenAI has been releasing in the last few days is that OpenAI has a new model with higher intelligence (and risks), and the team is preparing the ground for its release. Again, just a rumor/speculation, whatever you want to call it.

1 comment on commit 7eb670d


@vercel vercel bot commented on 7eb670d Dec 21, 2023
