This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.
The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!
Combining Datasets: Concat and Append
Some of the most interesting studies of data come from combining different data sources. These operations can involve anything from very straightforward concatenation of two different datasets to more complicated database-style joins and merges that correctly handle any overlaps between the datasets. Series and DataFrames are built with this type of operation in mind, and Pandas includes functions and methods that make this sort of data wrangling fast and straightforward.
Here we'll take a look at simple concatenation of Series and DataFrames with the pd.concat function; later we'll dive into more sophisticated in-memory merges and joins implemented in Pandas.
We begin with the standard imports:
import pandas as pd
import numpy as np
For convenience, we'll define this function, which creates a DataFrame of a particular form that will be useful below:
def make_df(cols, ind):
    """Quickly make a DataFrame"""
    data = {c: [str(c) + str(i) for i in ind]
            for c in cols}
    return pd.DataFrame(data, ind)
# example DataFrame
make_df('ABC', range(3))
  | A | B | C
---|---|---|---
0 | A0 | B0 | C0
1 | A1 | B1 | C1
2 | A2 | B2 | C2
In addition, we'll create a quick class that allows us to display multiple DataFrames side by side. The code makes use of the special _repr_html_ method, which IPython uses to implement its rich object display:
class display(object):
    """Display HTML representation of multiple objects"""
    template = """<div style="float: left; padding: 10px;">
    <p style='font-family:"Courier New", Courier, monospace'>{0}</p>{1}
    </div>"""

    def __init__(self, *args):
        self.args = args

    def _repr_html_(self):
        return '\n'.join(self.template.format(a, eval(a)._repr_html_())
                         for a in self.args)

    def __repr__(self):
        return '\n\n'.join(a + '\n' + repr(eval(a))
                           for a in self.args)
The use of this will become clearer as we continue our discussion in the following section.
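As a quick illustration (a minimal sketch; demo1 and demo2 are throwaway names used only here), the arguments are passed as strings naming objects in the notebook's namespace, and each name is rendered alongside its evaluated result:
demo1 = make_df('AB', [0, 1])
demo2 = make_df('AB', [2, 3])
display('demo1', 'demo2')  # each argument is evaluated by name and shown side by side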
Recall: Concatenation of NumPy Arrays
Concatenation of Series and DataFrame objects is very similar to concatenation of NumPy arrays, which can be done via the np.concatenate function as discussed in The Basics of NumPy Arrays. Recall that with it, you can combine the contents of two or more arrays into a single array:
x = [1, 2, 3]
y = [4, 5, 6]
z = [7, 8, 9]
np.concatenate([x, y, z])
array([1, 2, 3, 4, 5, 6, 7, 8, 9])
The first argument is a list or tuple of arrays to concatenate. Additionally, it takes an axis keyword that allows you to specify the axis along which the result will be concatenated:
x = [[1, 2],
     [3, 4]]
np.concatenate([x, x], axis=1)
array([[1, 2, 1, 2],
       [3, 4, 3, 4]])
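For comparison, the same arrays concatenated along axis=0 (the default) are stacked vertically rather than side by side:
np.concatenate([x, x], axis=0)
array([[1, 2],
       [3, 4],
       [1, 2],
       [3, 4]])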
Simple Concatenation with pd.concat
Pandas has a function, pd.concat(), which has a similar syntax to np.concatenate but contains a number of options that we'll discuss momentarily:
# Signature in recent versions of Pandas (the older join_axes argument has been removed)
pd.concat(objs, axis=0, join='outer', ignore_index=False, keys=None,
          levels=None, names=None, verify_integrity=False, sort=False,
          copy=True)
pd.concat() can be used for a simple concatenation of Series or DataFrame objects, just as np.concatenate() can be used for simple concatenations of arrays:
ser1 = pd.Series(['A', 'B', 'C'], index=[1, 2, 3])
ser2 = pd.Series(['D', 'E', 'F'], index=[4, 5, 6])
pd.concat([ser1, ser2])
1    A
2    B
3    C
4    D
5    E
6    F
dtype: object
It also works to concatenate higher-dimensional objects, such as DataFrames:
df1 = make_df('AB', [1, 2])
df2 = make_df('AB', [3, 4])
display('df1', 'df2', 'pd.concat([df1, df2])')
df1
  | A | B
---|---|---
1 | A1 | B1
2 | A2 | B2
df2
  | A | B
---|---|---
3 | A3 | B3
4 | A4 | B4
pd.concat([df1, df2])
  | A | B
---|---|---
1 | A1 | B1
2 | A2 | B2
3 | A3 | B3
4 | A4 | B4
By default, the concatenation takes place row-wise within the DataFrame (i.e., axis=0). Like np.concatenate, pd.concat allows specification of an axis along which concatenation will take place. Consider the following example:
df3 = make_df('AB', [0, 1])
df4 = make_df('CD', [0, 1])
display('df3', 'df4', "pd.concat([df3, df4], axis='columns')")
df3
  | A | B
---|---|---
0 | A0 | B0
1 | A1 | B1
df4
  | C | D
---|---|---
0 | C0 | D0
1 | C1 | D1
pd.concat([df3, df4], axis='columns')
  | A | B | C | D
---|---|---|---|---
0 | A0 | B0 | C0 | D0
1 | A1 | B1 | C1 | D1
We could have equivalently specified axis=1; here we've used the more intuitive axis='columns'.
Duplicate indices
One important difference between np.concatenate and pd.concat is that Pandas concatenation preserves indices, even if the result will have duplicate indices! Consider this simple example:
x = make_df('AB', [0, 1])
y = make_df('AB', [2, 3])
y.index = x.index # make duplicate indices!
display('x', 'y', 'pd.concat([x, y])')
x
  | A | B
---|---|---
0 | A0 | B0
1 | A1 | B1
y
  | A | B
---|---|---
0 | A2 | B2
1 | A3 | B3
pd.concat([x, y])
  | A | B
---|---|---
0 | A0 | B0
1 | A1 | B1
0 | A2 | B2
1 | A3 | B3
Notice the repeated indices in the result. While this is valid within DataFrames, the outcome is often undesirable. pd.concat() gives us a few ways to handle it.
Catching the repeats as an error
If you'd like to simply verify that the indices in the result of pd.concat() do not overlap, you can specify the verify_integrity flag. With this set to True, the concatenation will raise an exception if there are duplicate indices. Here is an example, where for clarity we'll catch and print the error message:
try:
    pd.concat([x, y], verify_integrity=True)
except ValueError as e:
    print("ValueError:", e)
ValueError: Indexes have overlapping values: Int64Index([0, 1], dtype='int64')
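Conversely, when the indices do not overlap the check passes silently; here is a quick sketch reusing df1 and df2 from earlier, whose indices (1-2 and 3-4) are disjoint:
pd.concat([df1, df2], verify_integrity=True)  # no exception: all index values are unique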
Ignoring the index
Sometimes the index itself does not matter, and you would prefer it to simply be ignored. This option can be specified using the ignore_index flag. With this set to True, the concatenation will create a new integer index for the resulting DataFrame:
display('x', 'y', 'pd.concat([x, y], ignore_index=True)')
x
  | A | B
---|---|---
0 | A0 | B0
1 | A1 | B1
y
  | A | B
---|---|---
0 | A2 | B2
1 | A3 | B3
pd.concat([x, y], ignore_index=True)
  | A | B
---|---|---
0 | A0 | B0
1 | A1 | B1
2 | A2 | B2
3 | A3 | B3
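The same flag works for Series concatenation; for example, reusing ser1 and ser2 from above:
pd.concat([ser1, ser2], ignore_index=True)
0    A
1    B
2    C
3    D
4    E
5    F
dtype: object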
Adding MultiIndex keys
Another option is to use the keys argument to specify a label for the data sources; the result will be a hierarchically indexed DataFrame containing the data:
display('x', 'y', "pd.concat([x, y], keys=['x', 'y'])")
x
  | A | B
---|---|---
0 | A0 | B0
1 | A1 | B1
y
  | A | B
---|---|---
0 | A2 | B2
1 | A3 | B3
pd.concat([x, y], keys=['x', 'y'])
  |   | A | B
---|---|---|---
x | 0 | A0 | B0
  | 1 | A1 | B1
y | 0 | A2 | B2
  | 1 | A3 | B3
The result is a multiply indexed DataFrame, and we can use the tools discussed in Hierarchical Indexing to transform this data into the representation we're interested in.
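For instance, selecting on the outer level of the hierarchical index recovers one of the original sources:
pd.concat([x, y], keys=['x', 'y']).loc['y']  # just the rows that came from y
    A   B
0  A2  B2
1  A3  B3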
Concatenation with joins
In the simple examples we just looked at, we were mainly concatenating DataFrames with shared column names. In practice, data from different sources might have different sets of column names, and pd.concat offers several options in this case. Consider the concatenation of the following two DataFrames, which have some (but not all!) columns in common:
df5 = make_df('ABC', [1, 2])
df6 = make_df('BCD', [3, 4])
display('df5', 'df6', 'pd.concat([df5, df6])')
df5
  | A | B | C
---|---|---|---
1 | A1 | B1 | C1
2 | A2 | B2 | C2
df6
  | B | C | D
---|---|---|---
3 | B3 | C3 | D3
4 | B4 | C4 | D4
pd.concat([df5, df6])
  | A | B | C | D
---|---|---|---|---
1 | A1 | B1 | C1 | NaN
2 | A2 | B2 | C2 | NaN
3 | NaN | B3 | C3 | D3
4 | NaN | B4 | C4 | D4
By default, the entries for which no data is available are filled with NA values. To change this, we can adjust the join parameter of the concatenate function: by default the join is a union of the input columns (join='outer'), but we can change this to an intersection of the columns using join='inner':
display('df5', 'df6',
        "pd.concat([df5, df6], join='inner')")
df5
  | A | B | C
---|---|---|---
1 | A1 | B1 | C1
2 | A2 | B2 | C2
df6
  | B | C | D
---|---|---|---
3 | B3 | C3 | D3
4 | B4 | C4 | D4
pd.concat([df5, df6], join='inner')
  | B | C
---|---|---
1 | B1 | C1
2 | B2 | C2
3 | B3 | C3
4 | B4 | C4
In older versions of Pandas, another option was to directly specify the index of the remaining columns using the join_axes argument, which took a list of index objects. That argument has since been removed from pd.concat; the same result can be obtained by reindexing the concatenated result on the columns of the first input:
display('df5', 'df6',
        "pd.concat([df5, df6]).reindex(columns=df5.columns)")
df5
  | A | B | C
---|---|---|---
1 | A1 | B1 | C1
2 | A2 | B2 | C2
df6
  | B | C | D
---|---|---|---
3 | B3 | C3 | D3
4 | B4 | C4 | D4
pd.concat([df5, df6]).reindex(columns=df5.columns)
  | A | B | C
---|---|---|---
1 | A1 | B1 | C1
2 | A2 | B2 | C2
3 | NaN | B3 | C3
4 | NaN | B4 | C4
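Equivalently, because the outer join already contains every column, you can simply select df5's columns from the concatenated result:
pd.concat([df5, df6])[df5.columns]  # same result as the reindex above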
The combination of options of the pd.concat function allows a wide range of possible behaviors when joining two datasets; keep these in mind as you use these tools for your own data.
The append() method
Because direct array concatenation is so common, Series and DataFrame objects have an append method that can accomplish the same thing in fewer keystrokes: rather than calling pd.concat([df1, df2]), you can simply call df1.append(df2). Be aware that in recent versions of Pandas this method is deprecated in favor of pd.concat, as the warning in the output below shows:
display('df1', 'df2', 'df1.append(df2)')
<string>:1: FutureWarning: The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead.
df1
  | A | B
---|---|---
1 | A1 | B1
2 | A2 | B2
df2
  | A | B
---|---|---
3 | A3 | B3
4 | A4 | B4
df1.append(df2)
  | A | B
---|---|---
1 | A1 | B1
2 | A2 | B2
3 | A3 | B3
4 | A4 | B4
Keep in mind that unlike the append() and extend() methods of Python lists, the append() method in Pandas does not modify the original object; instead, it creates a new object with the combined data. It also is not a very efficient method, because it involves creating a new index and data buffer for every call. Thus, if you plan to do multiple append operations, it is generally better to build a list of DataFrames and pass them all at once to the concat() function.
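For example, here is a minimal sketch of that pattern using the make_df helper defined earlier:
# Build the pieces up in a Python list, then concatenate them in a single call
pieces = [make_df('AB', [2*i + 1, 2*i + 2]) for i in range(3)]
result = pd.concat(pieces)  # one concat instead of repeated append calls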
In the next section, we'll look at another, more powerful approach to combining data from multiple sources: the database-style merges/joins implemented in pd.merge. For more information on concat(), append(), and related functionality, see the "Merge, Join, and Concatenate" section of the Pandas documentation.