Memory usage increase continuously #127

Open · wjliu opened this issue Apr 1, 2017 · 4 comments

wjliu commented Apr 1, 2017

Hi @deepakn94,
When I used Grizzly in my Python program, I found that the process was killed automatically. Through debugging, I found the cause: memory usage increases continuously.
I extracted the main logic, which just loads data from a CSV file and then runs query operations. Details below:

import pandas as pd
import grizzly.grizzly as gr
import grizzly.numpy_weld as gn

# Load the CSV into pandas, wrap it as a Grizzly DataFrame,
# filter by name, select the 'open' column, and evaluate the lazy Weld program.
df = pd.read_csv("total_price_completed.csv")
weld_df = gr.DataFrameWeld(df)
price_df = weld_df[weld_df['name'] == '000001.SZ']
price_list = price_df['open']
result_list = price_list.evaluate(verbose=False)

The code above was executed many times in a for loop, so memory usage kept growing until it reached the limit and the process was killed by the system.
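The loop looked roughly like this (a sketch; the exact structure and iteration count are approximate):

import pandas as pd
import grizzly.grizzly as gr

df = pd.read_csv("total_price_completed.csv")

# Each iteration rebuilds the lazy Weld computation and materializes the result.
# Resident memory grows on every iteration instead of plateauing.
for _ in range(1000):
    weld_df = gr.DataFrameWeld(df)
    price_df = weld_df[weld_df['name'] == '000001.SZ']
    result_list = price_df['open'].evaluate(verbose=False)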

deepakn94 (Collaborator)

Hi @wjliu,
Can you maybe try calling this free function to see if it helps (you might need to wrap result_list in a WeldValue object first though)?

Let me know if this doesn't fix your problem and I will take a closer look.

wjliu (Author) commented Apr 5, 2017

Hi @deepakn94,
I can't use WeldValue to wrap result_list, because it needs a C void pointer. How should I use WeldValue? I have just started learning Python, so please advise. Thanks.

deepakn94 (Collaborator)

Hmm, I might need to do some work here to get this to work cleanly. (Basically, we want the WeldValue returned by the Weld program to be stored somewhere in Python so that we can easily free it when required.)
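Roughly, the idea would be something like this sketch (hypothetical names only, not the current Grizzly/pyweld API):

class WeldResultHandle(object):
    """Sketch: retain the WeldValue backing a result so it can be freed later."""

    def __init__(self, weld_value, decoded):
        self.weld_value = weld_value   # raw WeldValue returned by the Weld run
        self.decoded = decoded         # e.g. the NumPy array handed back to the caller

    def free(self):
        # Release the Weld-owned buffer once the decoded result is no longer needed.
        if self.weld_value is not None:
            self.weld_value.free()     # assumes the bindings expose free() on WeldValue
            self.weld_value = None

Usage would then look something like:

handle = price_list.evaluate(verbose=False)   # would return a WeldResultHandle
result_list = handle.decoded
handle.free()                                 # explicitly release the Weld memory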
I will keep you posted, @wjliu.


zuowang commented Jun 5, 2017

Reproduced with this script:
examples/python/grizzly/data_cleaning_grizzly.py

#!/usr/bin/python

# The usual preamble
import pandas as pd
import grizzly.grizzly as gr
import time

# Get data (NYC 311 service request dataset) and start cleanup
na_values = ['NO CLUE', 'N/A', '0']
raw_requests = pd.read_csv('data/311-service-requests.csv',
                           na_values=na_values, dtype={'Incident Zip': str})
print "Done reading input file..."

for i in range(500):
    requests = gr.DataFrameWeld(raw_requests)
    start = time.time()

    # Fix requests with extra digits
    requests['Incident Zip'] = requests['Incident Zip'].str.slice(0, 5)

    # Fix requests with 00000 zipcodes
    zero_zips = requests['Incident Zip'] == '00000'
    requests['Incident Zip'][zero_zips] = "nan"

    # Display unique incident zips again (this time cleaned)
    print requests['Incident Zip'].unique().evaluate()
    end = time.time()

    print "Total end-to-end time, including compilation: %.2f" % (end - start)
