pandas
python
import pandas as pd
import numpy as np
from pandas.compat import StringIO

import random
import os
import itertools
import functools
import datetime
np.random.seed(123456)
pd.options.display.max_rows=15
import matplotlib # matplotlib.style.use('default')
np.set_printoptions(precision=4, suppress=True)
This is a repository for short and sweet examples and links for useful pandas recipes. We encourage users to add to this documentation.
Adding interesting links and/or inline examples to this section is a great First Pull Request.
Simplified, condensed, new-user-friendly, in-line examples have been inserted where possible to augment the Stack Overflow and GitHub links. Many of the links contain expanded information, beyond what the in-line examples offer.
pandas (pd) and NumPy (np) are the only two abbreviated imported modules. The rest are kept explicitly imported for newer users.
These examples are written for Python 3.4. Minor tweaks might be necessary for earlier Python versions.
These are some neat pandas idioms
if-then/if-then-else on one column, and assignment to one or more other columns:
python
df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],
                   'CCC' : [100,50,-30,-50]}); df
An if-then on one column
python
df.loc[df.AAA >= 5,'BBB'] = -1; df
An if-then with assignment to 2 columns:
python
df.loc[df.AAA >= 5,['BBB','CCC']] = 555; df
Add another line with different logic, to do the -else
python
df.loc[df.AAA < 5,['BBB','CCC']] = 2000; df
Or use pandas where after you've set up a mask
python
df_mask = pd.DataFrame({'AAA' : [True] * 4, 'BBB' : [False] * 4,
                        'CCC' : [True, False] * 2})
df.where(df_mask, -1000)
if-then-else using numpy's where()
python
df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],
                   'CCC' : [100,50,-30,-50]}); df
df['logic'] = np.where(df['AAA'] > 5,'high','low'); df
Split a frame with a boolean criterion
python
df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],
                   'CCC' : [100,50,-30,-50]}); df
dflow = df[df.AAA <= 5]; dflow

dfhigh = df[df.AAA > 5]; dfhigh
Select with multi-column criteria
python
df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],
                   'CCC' : [100,50,-30,-50]}); df
...and (without assignment returns a Series)
python
newseries = df.loc[(df['BBB'] < 25) & (df['CCC'] >= -40), 'AAA']; newseries
...or (without assignment returns a Series)
python
newseries = df.loc[(df['BBB'] > 25) | (df['CCC'] >= -40), 'AAA']; newseries
...or (with assignment modifies the DataFrame.)
python
df.loc[(df['BBB'] > 25) | (df['CCC'] >= 75), 'AAA'] = 0.1; df
Select rows with data closest to a certain value using argsort
python
df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],
                   'CCC' : [100,50,-30,-50]}); df

aValue = 43.0
df.loc[(df.CCC - aValue).abs().argsort()]
Dynamically reduce a list of criteria using binary operators
python
df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],
                   'CCC' : [100,50,-30,-50]}); df

Crit1 = df.AAA <= 5.5
Crit2 = df.BBB == 10.0
Crit3 = df.CCC > -40.0
One could hard code:
python
AllCrit = Crit1 & Crit2 & Crit3
...Or it can be done with a list of dynamically built criteria
python
CritList = [Crit1, Crit2, Crit3]
AllCrit = functools.reduce(lambda x, y: x & y, CritList)
df[AllCrit]
The indexing docs.
Using both row labels and value conditionals
python
df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],
                   'CCC' : [100,50,-30,-50]}); df
df[(df.AAA <= 6) & (df.index.isin([0,2,4]))]
Use loc for label-oriented slicing and iloc positional slicing
python
data = {'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40], 'CCC' : [100,50,-30,-50]}
df = pd.DataFrame(data=data, index=['foo','bar','boo','kar']); df
There are 2 explicit slicing methods, with a third general case
- Positional-oriented (Python slicing style : exclusive of end)
- Label-oriented (Non-Python slicing style : inclusive of end)
- General (Either slicing style : depends on if the slice contains labels or positions)
python
df.iloc[0:3]  # Positional

df.loc['bar':'kar']  # Label

# Generic
df.iloc[0:3]
df.loc['bar':'kar']
Ambiguity arises when an index consists of integers with a non-zero start or non-unit increment.
python
df2 = pd.DataFrame(data=data,index=[1,2,3,4]); #Note index starts at 1.
df2.iloc[1:3] #Position-oriented
df2.loc[1:3] #Label-oriented
Using inverse operator (~) to take the complement of a mask
python
df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],
                   'CCC' : [100,50,-30,-50]}); df
df[~((df.AAA <= 6) & (df.index.isin([0,2,4])))]
python
rng = pd.date_range('1/1/2013', periods=100, freq='D')
data = np.random.randn(100, 4)
cols = ['A', 'B', 'C', 'D']
df1, df2, df3 = (pd.DataFrame(data, rng, cols),
                 pd.DataFrame(data, rng, cols),
                 pd.DataFrame(data, rng, cols))
pf = pd.Panel({'df1':df1,'df2':df2,'df3':df3});pf
pf.loc[:,:,'F'] = pd.DataFrame(data, rng, cols);pf
Mask a panel by using np.where and then reconstructing the panel with the new masked values
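A minimal sketch of that recipe, assuming a pandas version that still ships Panel, and using the pf built above:
python
# keep positive entries, mask the rest with NaN, then rebuild the Panel
masked_vals = np.where(pf.values > 0, pf.values, np.nan)
pf_masked = pd.Panel(masked_vals, items=pf.items,
                     major_axis=pf.major_axis, minor_axis=pf.minor_axis)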
Efficiently and dynamically creating new columns using applymap
python
df = pd.DataFrame({'AAA' : [1,2,1,3], 'BBB' : [1,1,2,2],
                   'CCC' : [2,1,3,1]}); df
source_cols = df.columns  # or some subset would work too
new_cols = [str(x) + "_cat" for x in source_cols]
categories = {1 : 'Alpha', 2 : 'Beta', 3 : 'Charlie'}

df[new_cols] = df[source_cols].applymap(categories.get); df
Keep other columns when using min() with groupby
python
df = pd.DataFrame({'AAA' : [1,1,1,2,2,2,3,3],
                   'BBB' : [2,1,3,4,5,1,2,3]}); df
Method 1 : idxmin() to get the index of the mins
python
df.loc[df.groupby("AAA")["BBB"].idxmin()]
Method 2 : sort then take first of each
python
df.sort_values(by="BBB").groupby("AAA", as_index=False).first()
Notice the same results, with the exception of the index.
The multi-indexing docs.
Creating a multi-index from a labeled frame
python
df = pd.DataFrame({'row' : [0, 1, 2],
                   'One_X' : [1.1, 1.1, 1.1],
                   'One_Y' : [1.2, 1.2, 1.2],
                   'Two_X' : [1.11, 1.11, 1.11],
                   'Two_Y' : [1.22, 1.22, 1.22]}); df

# As Labelled Index
df = df.set_index('row'); df

# With Hierarchical Columns
df.columns = pd.MultiIndex.from_tuples([tuple(c.split('_')) for c in df.columns]); df

# Now stack & Reset
df = df.stack(0).reset_index(1); df

# And fix the labels (Notice the label 'level_1' got added automatically)
df.columns = ['Sample', 'All_X', 'All_Y']; df
Performing arithmetic with a multi-index that needs broadcasting
python
cols = pd.MultiIndex.from_tuples([(x, y) for x in ['A','B','C'] for y in ['O','I']])
df = pd.DataFrame(np.random.randn(2, 6), index=['n','m'], columns=cols); df

df = df.div(df['C'], level=1); df
Slicing a multi-index with xs
python
coords = [('AA','one'), ('AA','six'), ('BB','one'), ('BB','two'), ('BB','six')]
index = pd.MultiIndex.from_tuples(coords)
df = pd.DataFrame([11, 22, 33, 44, 55], index, ['MyData']); df
To take the cross section of the 1st level and 1st axis of the index:
python
df.xs('BB',level=0,axis=0) #Note : level and axis are optional, and default to zero
...and now the 2nd level of the 1st axis.
python
df.xs('six',level=1,axis=0)
Slicing a multi-index with xs, method #2
python
index = list(itertools.product(['Ada','Quinn','Violet'], ['Comp','Math','Sci']))
headr = list(itertools.product(['Exams','Labs'], ['I','II']))

indx = pd.MultiIndex.from_tuples(index, names=['Student','Course'])
cols = pd.MultiIndex.from_tuples(headr)  # Notice these are un-named

data = [[70 + x + y + (x*y) % 3 for x in range(4)] for y in range(9)]

df = pd.DataFrame(data, indx, cols); df

All = slice(None)

df.loc['Violet']
df.loc[(All, 'Math'), All]
df.loc[(slice('Ada','Quinn'), 'Math'), All]
df.loc[(All, 'Math'), ('Exams')]
df.loc[(All, 'Math'), (All, 'II')]
Setting portions of a multi-index with xs
Sort by specific column or an ordered list of columns, with a multi-index
python
df.sort_values(by=('Labs', 'II'), ascending=False)
Partial Selection, the need for sortedness
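A small sketch of the issue: partial (range) selection on a multi-index generally requires the index to be lexsorted, so call sort_index() first.
python
dfm = pd.DataFrame({'x': np.random.rand(4)},
                   index=pd.MultiIndex.from_tuples([('b', 1), ('a', 2),
                                                    ('b', 0), ('a', 1)]))
dfm = dfm.sort_index()       # without this, range slicing can raise an error
dfm.loc[('a', 1):('b', 0)]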
Prepending a level to a multiindex
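One common idiom for prepending a level (an assumption about what the link shows) is pd.concat with keys:
python
df = pd.DataFrame({'A': [1, 2]}, index=['x', 'y'])
pd.concat([df], keys=['top'])   # index is now ('top', 'x'), ('top', 'y')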
The panelnd docs.
The missing data docs.
Fill forward a reversed timeseries
python
df = pd.DataFrame(np.random.randn(6, 1),
                  index=pd.date_range('2013-08-01', periods=6, freq='B'),
                  columns=list('A'))
df.loc[df.index[3], 'A'] = np.nan
df
df.reindex(df.index[::-1]).ffill()
The grouping docs.
Unlike agg, apply's callable is passed a sub-DataFrame which gives you access to all the columns
python
df = pd.DataFrame({'animal': 'cat dog cat fish dog cat cat'.split(),
                   'size': list('SSMMMLL'),
                   'weight': [8, 10, 11, 1, 20, 12, 12],
                   'adult' : [False] * 5 + [True] * 2}); df

# List the size of the animals with the highest weight.
df.groupby('animal').apply(lambda subf: subf['size'][subf['weight'].idxmax()])
Using get_group
python
gb = df.groupby(['animal'])
gb.get_group('cat')
Apply to different items in a group
python
def GrowUp(x):
    avg_weight = sum(x[x['size'] == 'S'].weight * 1.5)
    avg_weight += sum(x[x['size'] == 'M'].weight * 1.25)
    avg_weight += sum(x[x['size'] == 'L'].weight)
    avg_weight /= len(x)
    return pd.Series(['L', avg_weight, True], index=['size', 'weight', 'adult'])
expected_df = gb.apply(GrowUp)
expected_df
Expanding Apply
python
S = pd.Series([i / 100.0 for i in range(1, 11)])

def CumRet(x, y):
    return x * (1 + y)

def Red(x):
    return functools.reduce(CumRet, x, 1.0)

S.expanding().apply(Red)
Replacing some values with mean of the rest of a group
python
df = pd.DataFrame({'A' : [1, 1, 2, 2], 'B' : [1, -1, 1, 2]})
gb = df.groupby('A')
def replace(g):
    mask = g < 0
    g.loc[mask] = g[~mask].mean()
    return g
gb.transform(replace)
Sort groups by aggregated data
python
df = pd.DataFrame({'code': ['foo', 'bar', 'baz'] * 2,
                   'data': [0.16, -0.21, 0.33, 0.45, -0.59, 0.62],
                   'flag': [False, True] * 3})
code_groups = df.groupby('code')
agg_n_sort_order = code_groups[['data']].transform(sum).sort_values(by='data')
sorted_df = df.loc[agg_n_sort_order.index]
sorted_df
Create multiple aggregated columns
python
rng = pd.date_range(start="2014-10-07", periods=10, freq='2min')
ts = pd.Series(data=list(range(10)), index=rng)

def MyCust(x):
    if len(x) > 2:
        return x[1] * 1.234
    return pd.NaT

mhc = {'Mean' : np.mean, 'Max' : np.max, 'Custom' : MyCust}
ts.resample("5min").apply(mhc)
ts
Create a value counts column and reassign back to the DataFrame
python
df = pd.DataFrame({'Color': 'Red Red Red Blue'.split(),
                   'Value': [100, 150, 50, 50]}); df

df['Counts'] = df.groupby(['Color']).transform(len)
df
Shift groups of the values in a column based on the index
python
df = pd.DataFrame({u'line_race': [10, 10, 8, 10, 10, 8],
                   u'beyer': [99, 102, 103, 103, 88, 100]},
                  index=[u'Last Gunfighter', u'Last Gunfighter', u'Last Gunfighter',
                         u'Paynter', u'Paynter', u'Paynter']); df

df['beyer_shifted'] = df.groupby(level=0)['beyer'].shift(1)
df
Select row with maximum value from each group
python
df = pd.DataFrame({'host' : ['other', 'other', 'that', 'this', 'this'],
                   'service' : ['mail', 'web', 'mail', 'mail', 'web'],
                   'no' : [1, 2, 1, 2, 1]}).set_index(['host', 'service'])

mask = df.groupby(level=0).agg('idxmax')
df_count = df.loc[mask['no']].reset_index()
df_count
Grouping like Python's itertools.groupby
python
df = pd.DataFrame([0, 1, 0, 1, 1, 1, 0, 1, 1], columns=['A'])
df.A.groupby((df.A != df.A.shift()).cumsum()).groups
df.A.groupby((df.A != df.A.shift()).cumsum()).cumsum()
Rolling Computation window based on values instead of counts
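With a DatetimeIndex you can pass a time offset as the window, so the window is defined by the index values rather than a fixed row count; a minimal sketch:
python
dft = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]},
                   index=pd.date_range('2013-01-01 09:00', periods=5, freq='s'))
dft.rolling('2s').sum()   # each window spans 2 seconds of the index, not 2 rows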
Create a list of dataframes, split using a delineation based on logic included in rows.
python
df = pd.DataFrame(data={'Case' : ['A','A','A','B','A','A','B','A','A'],
                        'Data' : np.random.randn(9)})

dfs = list(zip(*df.groupby((1 * (df['Case'] == 'B')).cumsum()
                           .rolling(window=3, min_periods=1).median())))[-1]

dfs[0]
dfs[1]
dfs[2]
The Pivot docs.
python
df = pd.DataFrame(data={'Province' : ['ON','QC','BC','AL','AL','MN','ON'],
                        'City' : ['Toronto','Montreal','Vancouver','Calgary',
                                  'Edmonton','Winnipeg','Windsor'],
                        'Sales' : [13,6,16,8,4,3,1]})

table = pd.pivot_table(df, values=['Sales'], index=['Province'],
                       columns=['City'], aggfunc=np.sum, margins=True)
table.stack('City')
Frequency table like plyr in R
python
grades = [48,99,75,80,42,80,72,68,36,78]
df = pd.DataFrame({'ID': ["x%d" % r for r in range(10)],
                   'Gender' : ['F', 'M', 'F', 'M', 'F', 'M', 'F', 'M', 'M', 'M'],
                   'ExamYear': ['2007','2007','2007','2008','2008','2008','2008','2009','2009','2009'],
                   'Class': ['algebra', 'stats', 'bio', 'algebra', 'algebra',
                             'stats', 'stats', 'algebra', 'bio', 'bio'],
                   'Participated': ['yes','yes','yes','yes','no','yes','yes','yes','yes','yes'],
                   'Passed': ['yes' if x > 50 else 'no' for x in grades],
                   'Employed': [True,True,True,False,False,False,False,True,True,False],
                   'Grade': grades})

df.groupby('ExamYear').agg({'Participated': lambda x: x.value_counts()['yes'],
                            'Passed': lambda x: sum(x == 'yes'),
                            'Employed' : lambda x : sum(x),
                            'Grade' : lambda x : sum(x) / len(x)})
Plot pandas DataFrame with year over year data
To create year and month crosstabulation:
python
df = pd.DataFrame({'value': np.random.randn(36)},
                  index=pd.date_range('2011-01-01', freq='M', periods=36))

pd.pivot_table(df, index=df.index.month, columns=df.index.year,
               values='value', aggfunc='sum')
Rolling Apply to Organize - Turning embedded lists into a multi-index frame
python
df = pd.DataFrame(data={'A' : [[2,4,8,16], [100,200], [10,20,30]],
                        'B' : [['a','b','c'], ['jj','kk'], ['ccc']]},
                  index=['I','II','III'])

def SeriesFromSubList(aList):
    return pd.Series(aList)

df_orgz = pd.concat(dict([(ind, row.apply(SeriesFromSubList))
                          for ind, row in df.iterrows()]))
Rolling Apply with a DataFrame returning a Series
Rolling Apply to multiple columns where function calculates a Series before a Scalar from the Series is returned
python
df = pd.DataFrame(data=np.random.randn(2000, 2) / 10000,
                  index=pd.date_range('2001-01-01', periods=2000),
                  columns=['A', 'B']); df

def gm(aDF, Const):
    v = ((((aDF.A + aDF.B) + 1).cumprod()) - 1) * Const
    return (aDF.index[0], v.iloc[-1])

S = pd.Series(dict([gm(df.iloc[i:min(i + 51, len(df) - 1)], 5)
                    for i in range(len(df) - 50)])); S
Rolling apply with a DataFrame returning a Scalar
Rolling Apply to multiple columns where function returns a Scalar (Volume Weighted Average Price)
python
rng = pd.date_range(start='2014-01-01', periods=100)
df = pd.DataFrame({'Open' : np.random.randn(len(rng)),
                   'Close' : np.random.randn(len(rng)),
                   'Volume' : np.random.randint(100, 2000, len(rng))},
                  index=rng); df

def vwap(bars):
    return ((bars.Close * bars.Volume).sum() / bars.Volume.sum())

window = 5
s = pd.concat([(pd.Series(vwap(df.iloc[i:i + window]),
                          index=[df.index[i + window]]))
               for i in range(len(df) - window)]); s.round(2)
Constructing a datetime range that excludes weekends and includes only certain times
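A sketch of one way to do this: build a dense range, then keep only weekdays and certain times (the span and hours here are made up):
python
rng = pd.date_range('2014-01-01', '2014-01-07 23:30', freq='30min')
rng = rng[rng.weekday < 5]                             # drop Saturdays/Sundays
rng = rng[rng.indexer_between_time('09:00', '16:00')]  # keep business hours only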
Aggregation and plotting time series
Turn a matrix with hours in columns and days in rows into a continuous row sequence in the form of a time series. How to rearrange a python pandas DataFrame?
Dealing with duplicates when reindexing a timeseries to a specified frequency
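A sketch of the idea: resolve duplicate timestamps first (here by keeping the first), after which reindexing to a frequency is unambiguous:
python
ts = pd.Series([1, 2, 3],
               index=pd.DatetimeIndex(['2014-01-01', '2014-01-01', '2014-01-03']))
ts = ts.groupby(level=0).first()   # collapse duplicate index entries
ts.asfreq('D')                     # now a plain reindex to daily frequency works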
Calculate the first day of the month for each entry in a DatetimeIndex
python
dates = pd.date_range('2000-01-01', periods=5)
dates.to_period(freq='M').to_timestamp()
The Resample docs.
Using Grouper instead of TimeGrouper for time grouping of values
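A minimal sketch of time grouping with pd.Grouper (the column names here are made up):
python
dfg = pd.DataFrame({'date': pd.date_range('2014-01-01', periods=6, freq='10D'),
                    'value': range(6)})
dfg.groupby(pd.Grouper(key='date', freq='M')).sum()   # group rows into calendar months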
Time grouping with some missing values
Valid frequency arguments to Grouper
Using TimeGrouper and another grouping to create subgroups, then apply a custom function
Resampling with custom periods
Resample intraday frame without adding new days
The Concat docs. The Join docs.
Append two dataframes with overlapping index (emulate R rbind)
python
rng = pd.date_range('2000-01-01', periods=6)
df1 = pd.DataFrame(np.random.randn(6, 3), index=rng, columns=['A', 'B', 'C'])
df2 = df1.copy()
Depending on df construction, ignore_index may be needed:
python
df = df1.append(df2,ignore_index=True); df
Self Join of a DataFrame
python
df = pd.DataFrame(data={'Area' : ['A'] * 5 + ['C'] * 2,
                        'Bins' : [110] * 2 + [160] * 3 + [40] * 2,
                        'Test_0' : [0, 1, 0, 1, 2, 0, 1],
                        'Data' : np.random.randn(7)}); df

df['Test_1'] = df['Test_0'] - 1

pd.merge(df, df, left_on=['Bins', 'Area', 'Test_0'],
         right_on=['Bins', 'Area', 'Test_1'], suffixes=('_L', '_R'))
Join with a criteria based on the values
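Since merge itself only matches on equality, one approach (a sketch, not necessarily what the link uses) is to merge on the keys and then filter with the value criterion:
python
left = pd.DataFrame({'key': [1, 1, 2], 'lval': [1, 9, 3]})
right = pd.DataFrame({'key': [1, 2], 'rval': [5, 20]})
merged = left.merge(right, on='key')
merged[merged['lval'] < merged['rval']]   # keep only pairs meeting the criterion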
Using searchsorted to merge based on values inside a range
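A sketch of the searchsorted idea: for each value, find where it falls among sorted boundaries:
python
edges = pd.Series([10, 20, 30])                 # sorted range boundaries
vals = pd.DataFrame({'v': [3, 12, 25]})
vals['bucket'] = edges.searchsorted(vals['v'])  # insertion position among the edges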
The Plotting docs.
Setting x-axis major and minor labels
Plotting multiple charts in an ipython notebook
Annotate a time-series plot #2
Generate Embedded plots in excel files using Pandas, Vincent and xlsxwriter
Boxplot for each quartile of a stratifying variable
python
df = pd.DataFrame({u'stratifying_var': np.random.uniform(0, 100, 20),
                   u'price': np.random.normal(100, 5, 20)})

df[u'quartiles'] = pd.qcut(df[u'stratifying_var'], 4,
                           labels=[u'0-25%', u'25-50%', u'50-75%', u'75-100%'])

df.boxplot(column=u'price', by=u'quartiles')
Performance comparison of SQL vs HDF5
The CSV docs
Reading only certain rows of a csv chunk-by-chunk
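A sketch of the chunked pattern, with a hypothetical file and column name:
python
pieces = []
for chunk in pd.read_csv('data.csv', chunksize=10000):  # 'data.csv' is hypothetical
    pieces.append(chunk[chunk['value'] > 0])            # keep only the rows you want
result = pd.concat(pieces, ignore_index=True)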
Reading the first few lines of a frame
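For the simplest case, nrows limits how much is read (file name again hypothetical):
python
pd.read_csv('data.csv', nrows=5)   # read only the first 5 data rows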
Reading a file that is compressed but not by gzip/bz2 (the native compressed formats which read_csv understands). This example shows a WinZipped file, but is a general application of opening the file within a context manager and using that handle to read. See here
Reading CSV with Unix timestamps and converting to local timezone
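A sketch of the conversion step after reading: parse epoch seconds, localize to UTC, then convert to the target zone (the values and zone are made up):
python
s = pd.Series([1349720105, 1349806505])   # epoch seconds
stamps = pd.to_datetime(s, unit='s')
stamps.dt.tz_localize('UTC').dt.tz_convert('US/Eastern')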
Write a multi-row index CSV without writing duplicates
The best way to combine multiple files into a single DataFrame is to read the individual frames one by one, put all of the individual frames into a list, and then combine the frames in the list using pd.concat:
python
for i in range(3):
    data = pd.DataFrame(np.random.randn(10, 4))
    data.to_csv('file_{}.csv'.format(i))

files = ['file_0.csv', 'file_1.csv', 'file_2.csv']
result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
You can use the same approach to read all files matching a pattern. Here is an example using glob:
python
import glob
files = glob.glob('file_*.csv')
result = pd.concat([pd.read_csv(f) for f in files], ignore_index=True)
Finally, this strategy will work with the other pd.read_*(...) functions described in the io docs.
python
for i in range(3):
    os.remove('file_{}.csv'.format(i))
Parsing date components in multi-columns is faster with a format
In [30]: i = pd.date_range('20000101',periods=10000)
In [31]: df = pd.DataFrame(dict(year = i.year, month = i.month, day = i.day))
In [32]: df.head()
Out[32]:
day month year
0 1 1 2000
1 2 1 2000
2 3 1 2000
3 4 1 2000
4 5 1 2000
In [33]: %timeit pd.to_datetime(df.year*10000+df.month*100+df.day,format='%Y%m%d')
100 loops, best of 3: 7.08 ms per loop
# simulate combining into a string, then parsing
In [34]: ds = df.apply(lambda x: "%04d%02d%02d" % (x['year'],x['month'],x['day']),axis=1)
In [35]: ds.head()
Out[35]:
0 20000101
1 20000102
2 20000103
3 20000104
4 20000105
dtype: object
In [36]: %timeit pd.to_datetime(ds)
1 loops, best of 3: 488 ms per loop
python
- data = """;;;;
- ;;;;
;;;; date;Param1;Param2;Param4;Param5 ;m²;°C;m²;m ;;;; 01.01.1990 00:00;1;1;2;3 01.01.1990 01:00;5;3;4;5 01.01.1990 02:00;9;5;6;7 01.01.1990 03:00;13;7;8;9 01.01.1990 04:00;17;9;10;11 01.01.1990 05:00;21;11;12;13 """
Option 1: pass rows explicitly to skiprows
python
pd.read_csv(StringIO(data), sep=';', skiprows=[11, 12],
            index_col=0, parse_dates=True, header=10)
Option 2: read column names and then data
python
pd.read_csv(StringIO(data), sep=';', header=10, nrows=10).columns
columns = pd.read_csv(StringIO(data), sep=';', header=10, nrows=10).columns
pd.read_csv(StringIO(data), sep=';', index_col=0,
            header=12, parse_dates=True, names=columns)
The SQL docs
Reading from databases with SQL
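A minimal sketch with the standard-library sqlite3 driver and a throwaway in-memory table:
python
import sqlite3
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE t (a INTEGER, b REAL)')
con.execute('INSERT INTO t VALUES (1, 1.5), (2, 2.5)')
pd.read_sql('SELECT * FROM t', con)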
The Excel docs
Reading from a filelike handle
Modifying formatting in XlsxWriter output
Reading HTML tables from a server that cannot handle the default request header
The HDFStores docs
Simple Queries with a Timestamp Index
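A minimal sketch of such a query against a table-format store (file name made up):
python
dfq = pd.DataFrame(np.random.randn(10, 2),
                   index=pd.date_range('2013-01-01', periods=10),
                   columns=['A', 'B'])
dfq.to_hdf('store.h5', 'dfq', format='table')
pd.read_hdf('store.h5', 'dfq', where="index >= '2013-01-05'")
os.remove('store.h5')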
Managing heterogeneous data using a linked multiple table hierarchy
Merging on-disk tables with millions of rows
Avoiding inconsistencies when writing to a store from multiple processes/threads
De-duplicating a large store by chunks, essentially a recursive reduction operation. Shows a function for taking in data from csv file and creating a store by chunks, with date parsing as well. See here
Creating a store chunk-by-chunk from a csv file
Appending to a store, while creating a unique index
Reading in a sequence of files, then providing a global unique index to a store while appending
Groupby on a HDFStore with low group density
Groupby on a HDFStore with high group density
Hierarchical queries on a HDFStore
Troubleshoot HDFStore exceptions
Setting min_itemsize with strings
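A sketch of the idea: reserve the column width up front so later, longer strings still fit:
python
store = pd.HDFStore('strings.h5')
store.append('df', pd.DataFrame({'s': ['short']}), min_itemsize={'s': 50})
store.append('df', pd.DataFrame({'s': ['a considerably longer string']}))
store.close()
os.remove('strings.h5')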
Using ptrepack to create a completely-sorted-index on a store
Storing Attributes to a group node
python
df = pd.DataFrame(np.random.randn(8, 3))
store = pd.HDFStore('test.h5')
store.put('df', df)

# you can store an arbitrary Python object via pickle
store.get_storer('df').attrs.my_attribute = dict(A=10)
store.get_storer('df').attrs.my_attribute
python
store.close()
os.remove('test.h5')
pandas readily accepts numpy record arrays, if you need to read in a binary file consisting of an array of C structs. For example, given this C program in a file called main.c
compiled with gcc main.c -std=gnu99
on a 64-bit machine,
#include <stdio.h>
#include <stdint.h>
typedef struct _Data
{
int32_t count;
double avg;
float scale;
} Data;
int main(int argc, const char *argv[])
{
size_t n = 10;
Data d[n];
for (int i = 0; i < n; ++i)
{
d[i].count = i;
d[i].avg = i + 1.0;
d[i].scale = (float) i + 2.0f;
}
FILE *file = fopen("binary.dat", "wb");
fwrite(&d, sizeof(Data), n, file);
fclose(file);
return 0;
}
the following Python code will read the binary file 'binary.dat' into a pandas DataFrame, where each element of the struct corresponds to a column in the frame:
names = 'count', 'avg', 'scale'
# note that the offsets are larger than the size of the type because of
# struct padding
offsets = 0, 8, 16
formats = 'i4', 'f8', 'f4'
dt = np.dtype({'names': names, 'offsets': offsets, 'formats': formats},
align=True)
df = pd.DataFrame(np.fromfile('binary.dat', dt))
Note
The offsets of the structure elements may be different depending on the architecture of the machine on which the file was created. Using a raw binary file format like this for general data storage is not recommended, as it is not cross platform. We recommend either HDF5 or msgpack, both of which are supported by pandas' IO facilities.
Numerical integration (sample-based) of a time series
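A sketch using the trapezoid rule, with the index spacing in seconds as dx:
python
ts = pd.Series([0.0, 1.0, 4.0],
               index=pd.date_range('2014-01-01', periods=3, freq='s'))
dx = ts.index.to_series().diff().dt.total_seconds()
integral = (ts.rolling(2).mean() * dx).sum()   # mean of neighbouring samples * spacing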
The Timedeltas docs.
python
s = pd.Series(pd.date_range('2012-1-1', periods=3, freq='D'))
s - s.max()
s.max() - s
s - datetime.datetime(2011,1,1,3,5)
s + datetime.timedelta(minutes=5)
datetime.datetime(2011,1,1,3,5) - s
datetime.timedelta(minutes=5) + s
Adding and subtracting deltas and dates
python
deltas = pd.Series([ datetime.timedelta(days=i) for i in range(3) ])
df = pd.DataFrame(dict(A = s, B = deltas)); df
df['New Dates'] = df['A'] + df['B'];
df['Delta'] = df['A'] - df['New Dates']; df
df.dtypes
Values can be set to NaT using np.nan, similar to datetime
python
y = s - s.shift(); y
y[1] = np.nan; y
To globally provide aliases for axis names, one can define these 2 functions:
python
def set_axis_alias(cls, axis, alias):
    if axis not in cls._AXIS_NUMBERS:
        raise Exception("invalid axis [%s] for alias [%s]" % (axis, alias))
    cls._AXIS_ALIASES[alias] = axis
python
def clear_axis_alias(cls, axis, alias):
    if axis not in cls._AXIS_NUMBERS:
        raise Exception("invalid axis [%s] for alias [%s]" % (axis, alias))
    cls._AXIS_ALIASES.pop(alias, None)
python
set_axis_alias(pd.DataFrame, 'columns', 'myaxis2')
df2 = pd.DataFrame(np.random.randn(3, 2), columns=['c1','c2'], index=['i1','i2','i3'])
df2.sum(axis='myaxis2')
clear_axis_alias(pd.DataFrame, 'columns', 'myaxis2')
To create a dataframe from every combination of some given values, like R's expand.grid() function, we can create a dict where the keys are column names and the values are lists of the data values:
python
def expand_grid(data_dict):
    rows = itertools.product(*data_dict.values())
    return pd.DataFrame.from_records(rows, columns=data_dict.keys())

df = expand_grid({'height': [60, 70],
                  'weight': [100, 140, 180],
                  'sex': ['Male', 'Female']})
df