Write a python code: using multiprocessing, each thread reads pandas dataframe, applies some function on it, and copies the result into python list that is accessible to all threads
Here's an example of how you could use the multiprocessing module in Python to apply a function to chunks of a Pandas DataFrame in parallel. Note that multiprocessing uses separate worker processes rather than threads, so the shared list must be provided by a Manager, which proxies access to it across processes:
``` 
import pandas as pd
import multiprocessing as mp

def process_dataframe(df, results, idx):
    # Apply some function to the chunk (here: a column-wise sum)
    result = df.sum()
    # Store the result in the Manager-backed shared list
    results[idx] = result

if __name__ == '__main__':
    # Load the DataFrame from a file
    df = pd.read_csv('data.csv')
    # Split the DataFrame into roughly equal chunks
    chunk_size = max(1, len(df) // 4)  # avoid a zero step for very small frames
    chunks = [df.iloc[i:i + chunk_size] for i in range(0, len(df), chunk_size)]
    # Create a shared list to store the results
    manager = mp.Manager()
    results = manager.list([None] * len(chunks))
    # Start the worker processes
    processes = []
    for i, chunk in enumerate(chunks):
        p = mp.Process(target=process_dataframe, args=(chunk, results, i))
        processes.append(p)
        p.start()
    # Wait for the worker processes to finish
    for p in processes:
        p.join()
    # Convert the proxy back to a plain list before printing
    print(list(results))
``` 
This code splits the DataFrame into roughly four chunks and applies the process_dataframe function to each chunk in a separate worker process. The results are stored in a Manager-backed shared list, results, which every process can read and write through a proxy. The main process waits for all workers to finish before printing the final results.
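If you only need the per-chunk results collected back into an ordinary Python list, a multiprocessing.Pool is usually simpler than managing processes and a shared list by hand: Pool.map distributes the chunks to workers and returns the results in order. A minimal sketch, assuming the same per-chunk sum as above and a small synthetic DataFrame so it runs standalone:

```python
import multiprocessing as mp

import numpy as np
import pandas as pd

def process_chunk(chunk):
    # Apply some function to the chunk (here: a column-wise sum)
    return chunk.sum()

if __name__ == '__main__':
    # Synthetic data in place of data.csv, so the sketch is self-contained
    df = pd.DataFrame({'a': range(8), 'b': range(8, 16)})
    # np.array_split handles uneven division cleanly (no zero-step issue)
    chunks = np.array_split(df, 4)
    # Pool.map sends each chunk to a worker process and returns
    # the results as a plain list, in the same order as the input
    with mp.Pool(processes=4) as pool:
        results = pool.map(process_chunk, chunks)
    print(results)
```

With this approach no Manager is needed, because the pool collects the return values itself; the trade-off is that each chunk and result is pickled between processes, so it pays off mainly when the applied function is expensive relative to that copying cost.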