I wish to write a python program that:
1. extracts the 200 most highly ranked movies on the IMDB website.
2. creates a list of their names and metadata (year, producers, director, screenwriter, list of actors, etc.) 
3. for each film, the program searches Google for ~100 reviews of those movies by film critics and stores each review in a separate file along with its metadata
Extracting the top 200 movies from IMDb's Top Rated chart can be done using web scraping techniques. Here's an example that uses the requests and BeautifulSoup libraries to extract the movie titles and their links:

```python
import requests
from bs4 import BeautifulSoup

url = "https://www.imdb.com/chart/top"
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")

# Find the list of movie title links in the chart table
movie_titles = soup.select("td.titleColumn a")

# Get the top 200 movie titles and their absolute URLs
top_200_movies = [(title.text.strip(), "https://www.imdb.com" + title["href"])
                  for title in movie_titles[:200]]
```
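Since IMDb changes its markup periodically, it's worth sanity-checking the `td.titleColumn a` selector against a small static snippet before running the full scrape. The HTML below is a hypothetical stand-in for the chart's historical layout, not the live page:

```python
from bs4 import BeautifulSoup

# Hypothetical stand-in for IMDb's chart markup -- the real page may differ
html = """
<table><tbody>
  <tr><td class="titleColumn"><a href="/title/tt0111161/">The Shawshank Redemption</a></td></tr>
  <tr><td class="titleColumn"><a href="/title/tt0068646/">The Godfather</a></td></tr>
</tbody></table>
"""

soup = BeautifulSoup(html, "html.parser")
links = soup.select("td.titleColumn a")
top = [(a.text.strip(), "https://www.imdb.com" + a["href"]) for a in links]
print(top[0])  # ('The Shawshank Redemption', 'https://www.imdb.com/title/tt0111161/')
```

If `soup.select(...)` returns an empty list against the live page, the chart layout has changed and the selector needs updating.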

Once you have the list of movie titles and links, you can use the IMDbPY library to extract the metadata for each movie:

```python
import imdb

ia = imdb.IMDb()

# Function to extract metadata for a movie
def get_movie_metadata(movie_url):
    # The URL path segment looks like "tt0111161"; IMDbPY expects only the digits
    imdb_id = movie_url.split("/")[-2].lstrip("t")
    movie = ia.get_movie(imdb_id)

    return {
        "title": movie.get("title"),
        "year": movie.get("year"),
        "producers": [person.get("name") for person in movie.get("producers", [])],
        "director": [person.get("name") for person in movie.get("directors", [])],
        "screenwriter": [person.get("name") for person in movie.get("writers", [])],
        "actors": [person.get("name") for person in movie.get("cast", [])]
    }

# Get the metadata for each movie
movie_metadata = [get_movie_metadata(url) for title, url in top_200_movies]
```
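One subtle point worth testing in isolation: IMDbPY's `get_movie` expects the numeric ID (e.g. `0111161`), while the URL path segment carries the `tt` prefix. A stdlib-only sketch of that extraction (the URL and function name are illustrative, not part of the code above):

```python
def extract_imdb_id(movie_url: str) -> str:
    """Pull the numeric IMDb ID out of a title URL like .../title/tt0111161/."""
    segment = movie_url.rstrip("/").split("/")[-1]  # e.g. "tt0111161"
    return segment.removeprefix("tt")  # IMDbPY wants digits only (Python 3.9+)

print(extract_imdb_id("https://www.imdb.com/title/tt0111161/"))  # 0111161
```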

To search Google for reviews of each movie, you can use the Google Custom Search API or a web scraping library like BeautifulSoup and requests. Here's an example using BeautifulSoup and requests:

```python
import os
import time
import requests
from bs4 import BeautifulSoup

# Set up the Google search query
# Set up the Google search query
base_url = "https://www.google.com/search?q="
query_template = "{} movie review site:{}.com"
movie_sites = ["nytimes", "variety", "hollywoodreporter"]

# Function to search Google for reviews of a movie
def search_movie_reviews(title):
    reviews = []
    for site in movie_sites:
        query = query_template.format(title, site)
        url = base_url + "+".join(query.split())
        response = requests.get(url)
        soup = BeautifulSoup(response.text, "html.parser")
        # Substring-match on the domain so both "site.com" and "www.site.com" links are caught
        review_links = soup.select('a[href*="' + site + '.com"]')
        for link in review_links:
            review_url = link["href"]
            review_title = link.text.strip()
            reviews.append({"title": review_title, "url": review_url, "source": site})
        # Sleep for 5 seconds to avoid overwhelming the server
        time.sleep(5)
    return reviews

# Create a directory to store the reviews
if not os.path.exists("reviews"):
    os.mkdir("reviews")

# Search for reviews of each movie and save them to a separate file
for movie in movie_metadata:
    reviews = search_movie_reviews(movie["title"])
    filename = "reviews/" + movie["title"].replace(" ", "_") + ".txt"
    with open(filename, "w") as f:
        f.write("Movie: {}\n".format(movie["title"]))
        f.write("Year: {}\n".format(movie["year"]))
        f.write("Producers: {}\n".format(", ".join(movie["producers"])))
        f.write("Director: {}\n".format(", ".join(movie["director"])))
        f.write("Screenwriter: {}\n".format(", ".join(movie["screenwriter"])))
```
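Two fragile spots in the snippet above are worth hardening. Joining query words with `+` breaks on titles containing punctuation (colons, ampersands, non-ASCII characters), and titles containing `/` (e.g. *Face/Off*) produce invalid file paths. A stdlib-only sketch of safer helpers (`build_search_url` and `safe_filename` are hypothetical names, not part of the code above):

```python
import re
from urllib.parse import quote_plus

BASE_URL = "https://www.google.com/search?q="

def build_search_url(title: str, site: str) -> str:
    # quote_plus percent-encodes punctuation and turns spaces into "+"
    query = "{} movie review site:{}.com".format(title, site)
    return BASE_URL + quote_plus(query)

def safe_filename(title: str) -> str:
    # Collapse anything that is not a word character or dash into "_"
    return re.sub(r"[^\w-]+", "_", title).strip("_") + ".txt"

print(build_search_url("Léon: The Professional", "nytimes"))
print(safe_filename("Face/Off"))  # Face_Off.txt
```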
Will this work?