Working with text data#

Text data types#

There are two ways to store text data in pandas:

  1. StringDtype extension type.
  2. NumPy object dtype.

We recommend using StringDtype to store text data via the alias dtype="str" (the default when the dtype of strings is inferred); see below for more details.

Prior to pandas 1.0, object dtype was the only option. This was unfortunate for many reasons:

  1. You can accidentally store a mixture of strings and non-strings in an object dtype array. It’s better to have a dedicated dtype.
  2. object dtype breaks dtype-specific operations like DataFrame.select_dtypes(). There isn’t a clear way to select just text while excluding non-text but still object-dtype columns.
  3. When reading code, the contents of an object dtype array are less clear than 'string'.

When using StringDtype with PyArrow as the storage (see below), users will see large performance improvements in memory as well as time for certain operations when compared to object dtype arrays. When not using PyArrow as the storage, the performance of StringDtype is about the same as that of object. We expect future enhancements to significantly increase the performance and lower the memory overhead of StringDtype in this case.
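As a rough illustration of that difference, you can measure the memory footprint of the two representations yourself. This is a minimal sketch, not a benchmark; the sample data is hypothetical and the exact numbers depend on your data and on whether PyArrow is installed:

import pandas as pd

# Hypothetical sample data: many short, repeated strings
words = ["spam", "eggs", "bacon"] * 300_000

# NumPy object storage: every element is a separate Python str object
as_object = pd.Series(words, dtype=object)

# String dtype: backed by PyArrow when PyArrow is installed
as_str = pd.Series(words, dtype="str")

# deep=True makes pandas count the Python string objects themselves
print(as_object.memory_usage(deep=True))
print(as_str.memory_usage(deep=True))  # typically much smaller with PyArrow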

Changed in version 3.0: The default when pandas infers the dtype of a collection of strings is to use dtype='str'. This will use np.nan as its NA value and be backed by a PyArrow string array when PyArrow is installed, or by a NumPy object array when PyArrow is not installed.

In [1]: pd.Series(["a", "b", "c"])
Out[1]:
0    a
1    b
2    c
dtype: str

Specifying StringDtype explicitly#

When explicitly specifying the dtype, we generally recommend the alias dtype="str" if you want np.nan as the NA value, or the alias dtype="string" if you want pd.NA as the NA value.

In [2]: pd.Series(["a", "b", None], dtype="str")
Out[2]:
0      a
1      b
2    NaN
dtype: str

In [3]: pd.Series(["a", "b", None], dtype="string")
Out[3]:
0       a
1       b
2    <NA>
dtype: string

Specifying either alias will also convert non-string data to strings:

In [4]: s = pd.Series(["a", 2, np.nan], dtype="str")

In [5]: s
Out[5]:
0      a
1      2
2    NaN
dtype: str

In [6]: type(s[1])
Out[6]: str

or convert from existing pandas data:

In [7]: s1 = pd.Series([1, 2, pd.NA], dtype="Int64")

In [8]: s1
Out[8]:
0       1
1       2
2    <NA>
dtype: Int64

In [9]: s2 = s1.astype("string")

In [10]: s2
Out[10]:
0       1
1       2
2    <NA>
dtype: string

In [11]: type(s2[0])
Out[11]: str

However, there are four distinct StringDtype variants that may be used. See the section The four StringDtype variants below for details.

String methods#

Series and Index are equipped with a set of string processing methods that make it easy to operate on each element of the array. Perhaps most importantly, these methods exclude missing/NA values automatically. These are accessed via the str attribute and generally have names matching the equivalent (scalar) built-in string methods:

In [12]: s = pd.Series(
   ....:     ["A", "B", "C", "Aaba", np.nan, "dog", "cat"],
   ....:     dtype="str",
   ....: )

In [13]: s.str.lower()
Out[13]:
0       a
1       b
2       c
3    aaba
4     NaN
5     dog
6     cat
dtype: str

In [14]: s.str.upper()
Out[14]:
0       A
1       B
2       C
3    AABA
4     NaN
5     DOG
6     CAT
dtype: str

In [15]: s.str.len()
Out[15]:
0    1.0
1    1.0
2    1.0
3    4.0
4    NaN
5    3.0
6    3.0
dtype: float64

In [16]: idx = pd.Index([" jack", "jill ", " jesse ", "frank"])

In [17]: idx.str.strip()
Out[17]: Index(['jack', 'jill', 'jesse', 'frank'], dtype='str')

In [18]: idx.str.lstrip()
Out[18]: Index(['jack', 'jill ', 'jesse ', 'frank'], dtype='str')

In [19]: idx.str.rstrip()
Out[19]: Index([' jack', 'jill', ' jesse', 'frank'], dtype='str')

The string methods on Index are especially useful for cleaning up or transforming DataFrame columns. For instance, you may have columns with leading or trailing whitespace:

In [20]: df = pd.DataFrame(
   ....:     np.random.randn(3, 2),
   ....:     columns=[" Column A ", " Column B "],
   ....:     index=range(3),
   ....: )

In [21]: df
Out[21]:
    Column A    Column B
0   0.469112   -0.282863
1  -1.509059   -1.135632
2   1.212112   -0.173215

Since df.columns is an Index object, we can use the .str accessor

In [22]: df.columns.str.strip()
Out[22]: Index(['Column A', 'Column B'], dtype='str')

In [23]: df.columns.str.lower()
Out[23]: Index([' column a ', ' column b '], dtype='str')

These string methods can then be used to clean up the columns as needed. Here we are removing leading and trailing whitespaces, lower casing all names, and replacing any remaining whitespaces with underscores:

In [24]: df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")

In [25]: df
Out[25]:
   column_a  column_b
0  0.469112 -0.282863
1 -1.509059 -1.135632
2  1.212112 -0.173215

Note

If you have a Series where lots of elements are repeated (i.e. the number of unique elements in the Series is a lot smaller than the length of the Series), it can be faster to convert the original Series to one of type category and then use .str.<method> or .dt.<property> on that, as shown in the sketch below. The performance difference comes from the fact that, for Series of type category, the string operations are done on the .categories and not on each element of the Series.

Please note that a Series of type category with string .categories has some limitations in comparison to a Series of type string (e.g. you can’t add strings to each other: s + " " + s won’t work if s is a Series of type category). Also, .str methods which operate on elements of type list are not available on such a Series.
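A minimal sketch of the pattern described above, assuming a Series with many repeated values (the sample data is hypothetical, and the actual speedup depends on your data):

import pandas as pd

# A long Series with only three distinct values
s = pd.Series(["low", "medium", "high"] * 1_000_000)

# Convert to category, then use the .str accessor as usual;
# the string operation runs once per category rather than once per element
s_cat = s.astype("category")
result = s_cat.str.upper()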

Warning

The type of the Series is inferred, and the allowed types (i.e. strings) are enforced.

Generally speaking, the .str accessor is intended to work only on strings. With very few exceptions, other uses are not supported, and may be disabled at a later point.
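For instance, calling .str on a numeric Series raises an error. This is an illustrative sketch; the exact exception message may vary between pandas versions:

import pandas as pd

nums = pd.Series([1, 2, 3])
nums.str.upper()  # raises AttributeError: Can only use .str accessor with string values!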

Splitting and replacing strings#

Methods like split return a Series of lists:

In [26]: s2 = pd.Series(["a_b_c", "c_d_e", np.nan, "f_g_h"], dtype="str")

In [27]: s2.str.split("_")
Out[27]:
0    [a, b, c]
1    [c, d, e]
2          NaN
3    [f, g, h]
dtype: object

Elements in the split lists can be accessed using get or [] notation:

In [28]: s2.str.split("_").str.get(1)
Out[28]:
0      b
1      d
2    NaN
3      g
dtype: object

In [29]: s2.str.split("_").str[1]
Out[29]:
0      b
1      d
2    NaN
3      g
dtype: object

It is easy to expand this to return a DataFrame using expand.

In [30]: s2.str.split("_", expand=True)
Out[30]:
     0    1    2
0    a    b    c
1    c    d    e
2  NaN  NaN  NaN
3    f    g    h

When the original Series has StringDtype, the output columns will all be StringDtype as well, as the sketch below shows.
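You can check this by inspecting the dtypes of the expanded result. A short sketch, continuing from the s2 defined above:

out = s2.str.split("_", expand=True)
out.dtypes  # each column carries the same string dtype as s2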

It is also possible to limit the number of splits:

In [31]: s2.str.split("_", expand=True, n=1)
Out[31]:
     0    1
0    a  b_c
1    c  d_e
2  NaN  NaN
3    f  g_h

rsplit is similar to split except it works in the reverse direction, i.e., from the end of the string to the beginning of the string:

In [32]: s2.str.rsplit("_", expand=True, n=1)
Out[32]:
     0    1
0  a_b    c
1  c_d    e
2  NaN  NaN
3  f_g    h

replace optionally uses regular expressions:

In [33]: s3 = pd.Series(
   ....:     ["A", "B", "C", "Aaba", "Baca", "", np.nan, "CABA", "dog", "cat"],
   ....:     dtype="str",
   ....: )

In [34]: s3
Out[34]:
0       A
1       B
2       C
3    Aaba
4    Baca
5
6     NaN
7    CABA
8     dog
9     cat
dtype: str

In [35]: s3.str.replace("^.a|dog", "XX-XX ", case=False, regex=True)
Out[35]:
0           A
1           B
2           C
3    XX-XX ba
4    XX-XX ca
5
6         NaN
7    XX-XX BA
8      XX-XX
9     XX-XX t
dtype: str

Changed in version 2.0: A single-character pattern with regex=True will also be treated as a regular expression:

In [36]: s4 = pd.Series(["a.b", ".", "b", np.nan, ""], dtype="str")

In [37]: s4
Out[37]:
0    a.b
1      .
2      b
3    NaN
4
dtype: str

In [38]: s4.str.replace(".", "a", regex=True)
Out[38]:
0    aaa
1      a
2      a
3    NaN
4
dtype: str

If you want literal replacement of a string (equivalent to str.replace()), you can set the optional regex parameter to False, rather than escaping each character. In this case both pat and repl must be strings:

In [39]: dollars = pd.Series(["12", "-$10", "$10,000"], dtype="str")

These lines are equivalent:

In [40]: dollars.str.replace(r"-\$", "-", regex=True)
Out[40]:
0         12
1        -10
2    $10,000
dtype: str

In [41]: dollars.str.replace("-$", "-", regex=False)
Out[41]:
0         12
1        -10
2    $10,000
dtype: str

The replace method can also take a callable as replacement. It is called on every pat using re.sub(). The callable should expect one positional argument (a regex object) and return a string.

Reverse every lowercase alphabetic word

In [42]: pat = r"[a-z]+"

In [43]: def repl(m):
   ....:     return m.group(0)[::-1]
   ....:

In [44]: pd.Series(["foo 123", "bar baz", np.nan], dtype="str").str.replace(
   ....:     pat, repl, regex=True
   ....: )
Out[44]:
0    oof 123
1    rab zab
2        NaN
dtype: str

Using regex groups

In [45]: pat = r"(?P<one>\w+) (?P<two>\w+) (?P<three>\w+)"

In [46]: def repl(m):
   ....:     return m.group("two").swapcase()
   ....:

In [47]: pd.Series(["Foo Bar Baz", np.nan], dtype="str").str.replace(
   ....:     pat, repl, regex=True
   ....: )
Out[47]:
0    bAR
1    NaN
dtype: str

The replace method also accepts a compiled regular expression object from re.compile() as a pattern. All flags should be included in the compiled regular expression object.

In [48]: import re

In [49]: regex_pat = re.compile(r"^.a|dog", flags=re.IGNORECASE)

In [50]: s3.str.replace(regex_pat, "XX-XX ", regex=True)
Out[50]:
0           A
1           B
2           C
3    XX-XX ba
4    XX-XX ca
5
6         NaN
7    XX-XX BA
8      XX-XX
9     XX-XX t
dtype: str

Including a flags argument when calling replace with a compiled regular expression object will raise a ValueError.

In [51]: s3.str.replace(regex_pat, 'XX-XX ', flags=re.IGNORECASE)
---------------------------------------------------------------------------
ValueError: case and flags cannot be set when pat is a compiled regex

removeprefix and removesuffix have the same effect as str.removeprefix and str.removesuffix added in Python 3.9:

In [52]: s = pd.Series(["str_foo", "str_bar", "no_prefix"])

In [53]: s.str.removeprefix("str_")
Out[53]:
0          foo
1          bar
2    no_prefix
dtype: str

In [54]: s = pd.Series(["foo_str", "bar_str", "no_suffix"])

In [55]: s.str.removesuffix("_str")
Out[55]:
0          foo
1          bar
2    no_suffix
dtype: str

Concatenation#

There are several ways to concatenate a Series or Index, either with itself or others, all based on cat(), resp. Index.str.cat.

Concatenating a single Series into a string#

The content of a Series (or Index) can be concatenated:

In [56]: s = pd.Series(["a", "b", "c", "d"], dtype="str")

In [57]: s.str.cat(sep=",")
Out[57]: 'a,b,c,d'

If not specified, the keyword sep for the separator defaults to the empty string, sep='':

In [58]: s.str.cat()
Out[58]: 'abcd'

By default, missing values are ignored. Using na_rep, they can be given a representation:

In [59]: t = pd.Series(["a", "b", np.nan, "d"], dtype="str")

In [60]: t.str.cat(sep=",")
Out[60]: 'a,b,d'

In [61]: t.str.cat(sep=",", na_rep="-")
Out[61]: 'a,b,-,d'

Concatenating a Series and something list-like into a Series#

The first argument to cat() can be a list-like object, provided that it matches the length of the calling Series (or Index).

In [62]: s.str.cat(["A", "B", "C", "D"])
Out[62]:
0    aA
1    bB
2    cC
3    dD
dtype: str

Missing values on either side will result in missing values in the result as well, unless na_rep is specified:

In [63]: s.str.cat(t)
Out[63]:
0     aa
1     bb
2    NaN
3     dd
dtype: str

In [64]: s.str.cat(t, na_rep="-")
Out[64]:
0    aa
1    bb
2    c-
3    dd
dtype: str

Concatenating a Series and something array-like into a Series#

The parameter others can also be two-dimensional. In this case, the number of rows must match the length of the calling Series (or Index).

In [65]: d = pd.concat([t, s], axis=1)

In [66]: s
Out[66]:
0    a
1    b
2    c
3    d
dtype: str

In [67]: d
Out[67]:
     0  1
0    a  a
1    b  b
2  NaN  c
3    d  d

In [68]: s.str.cat(d, na_rep="-")
Out[68]:
0    aaa
1    bbb
2    c-c
3    ddd
dtype: str

Concatenating a Series and an indexed object into a Series, with alignment#

For concatenation with a Series or DataFrame, it is possible to align the indexes before concatenation by setting the join keyword.

In [69]: u = pd.Series(["b", "d", "a", "c"], index=[1, 3, 0, 2], dtype="str")

In [70]: s
Out[70]:
0    a
1    b
2    c
3    d
dtype: str

In [71]: u
Out[71]:
1    b
3    d
0    a
2    c
dtype: str

In [72]: s.str.cat(u)
Out[72]:
0    aa
1    bb
2    cc
3    dd
dtype: str

In [73]: s.str.cat(u, join="left")
Out[73]:
0    aa
1    bb
2    cc
3    dd
dtype: str

The usual options are available for join (one of 'left', 'outer', 'inner', 'right'). In particular, alignment also means that the different lengths do not need to coincide anymore.

In [74]: v = pd.Series(["z", "a", "b", "d", "e"], index=[-1, 0, 1, 3, 4], dtype="str")

In [75]: s
Out[75]:
0    a
1    b
2    c
3    d
dtype: str

In [76]: v
Out[76]:
-1    z
 0    a
 1    b
 3    d
 4    e
dtype: str

In [77]: s.str.cat(v, join="left", na_rep="-")
Out[77]:
0    aa
1    bb
2    c-
3    dd
dtype: str

In [78]: s.str.cat(v, join="outer", na_rep="-")
Out[78]:
-1    -z
 0    aa
 1    bb
 2    c-
 3    dd
 4    -e
dtype: str

The same alignment can be used when others is a DataFrame:

In [79]: f = d.loc[[3, 2, 1, 0], :]

In [80]: s
Out[80]:
0    a
1    b
2    c
3    d
dtype: str

In [81]: f
Out[81]:
     0  1
3    d  d
2  NaN  c
1    b  b
0    a  a

In [82]: s.str.cat(f, join="left", na_rep="-")
Out[82]:
0    aaa
1    bbb
2    c-c
3    ddd
dtype: str

Concatenating a Series and many objects into a Series#

Several array-like items (specifically: Series, Index, and 1-dimensional variants of np.ndarray) can be combined in a list-like container (including iterators, dict-views, etc.).

In [83]: s
Out[83]:
0    a
1    b
2    c
3    d
dtype: str

In [84]: u
Out[84]:
1    b
3    d
0    a
2    c
dtype: str

In [85]: s.str.cat([u, u.to_numpy()], join="left")
Out[85]:
0    aab
1    bbd
2    cca
3    ddc
dtype: str
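As noted above, the container holding others need not be a list; a dict-view, for example, works as well. A small sketch reusing the s and u defined above:

others = {"first": u, "second": u.to_numpy()}
s.str.cat(others.values(), join="left")  # same result as passing a list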

All elements without an index (e.g. np.ndarray) within the passed list-like must match in length to the calling Series (or Index), but Series and Index may have arbitrary length (as long as alignment is not disabled with join=None):

In [86]: v
Out[86]:
-1    z
 0    a
 1    b
 3    d
 4    e
dtype: str

In [87]: s.str.cat([v, u, u.to_numpy()], join="outer", na_rep="-")
Out[87]:
-1    -z--
 0    aaab
 1    bbbd
 2    c-ca
 3    dddc
 4    -e--
dtype: str

If using join='right' on a list-like of others that contains different indexes, the union of these indexes will be used as the basis for the final concatenation:

In [88]: u.loc[[3]]
Out[88]:
3    d
dtype: str

In [89]: v.loc[[-1, 0]]
Out[89]:
-1    z
 0    a
dtype: str

In [90]: s.str.cat([u.loc[[3]], v.loc[[-1, 0]]], join="right", na_rep="-")
Out[90]:
 3    dd-
-1    --z
 0    a-a
dtype: str

Indexing with .str#

You can use [] notation to directly index by position locations. If you index past the end of the string, the result will be a NaN.

In [91]: s = pd.Series(
   ....:     ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="str"
   ....: )

In [92]: s.str[0]
Out[92]:
0      A
1      B
2      C
3      A
4      B
5    NaN
6      C
7      d
8      c
dtype: str

In [93]: s.str[1]
Out[93]:
0    NaN
1    NaN
2    NaN
3      a
4      a
5    NaN
6      A
7      o
8      a
dtype: str

Testing for strings that match or contain a pattern#

You can check whether elements contain a pattern:

In [119]: pattern = r"[0-9][a-z]"

In [120]: pd.Series(
   .....:     ["1", "2", "3a", "3b", "03c", "4dx"],
   .....:     dtype="str",
   .....: ).str.contains(pattern)
Out[120]:
0    False
1    False
2     True
3     True
4     True
5     True
dtype: bool

Or whether elements match a pattern:

In [121]: pd.Series(
   .....:     ["1", "2", "3a", "3b", "03c", "4dx"],
   .....:     dtype="str",
   .....: ).str.match(pattern)
Out[121]:
0    False
1    False
2     True
3     True
4    False
5     True
dtype: bool

In [122]: pd.Series(
   .....:     ["1", "2", "3a", "3b", "03c", "4dx"],
   .....:     dtype="str",
   .....: ).str.fullmatch(pattern)
Out[122]:
0    False
1    False
2     True
3     True
4    False
5    False
dtype: bool

Note

The distinction between match, fullmatch, and contains is strictness: fullmatch tests whether the entire string matches the regular expression; match tests whether there is a match of the regular expression that begins at the first character of the string; and contains tests whether there is a match of the regular expression at any position within the string.

The corresponding functions in the re package for these three match modes are re.fullmatch, re.match, and re.search, respectively.
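The correspondence with the re module can be sketched directly (illustrative only, using the pattern and one of the strings from the examples above):

import re

pattern = r"[0-9][a-z]"
text = "03c"

bool(re.search(pattern, text))     # like .str.contains: True, matches anywhere
bool(re.match(pattern, text))      # like .str.match: False, must match at the start
bool(re.fullmatch(pattern, text))  # like .str.fullmatch: False, must match the whole string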

Methods like match, fullmatch, contains, startswith, and endswith take an extra na argument so missing values can be considered True or False:

In [123]: s4 = pd.Series(
   .....:     ["A", "B", "C", "Aaba", "Baca", np.nan, "CABA", "dog", "cat"], dtype="str"
   .....: )

In [124]: s4.str.contains("A", na=False)
Out[124]:
0     True
1    False
2    False
3     True
4    False
5    False
6     True
7    False
8    False
dtype: bool

Creating indicator variables#

You can extract dummy variables from string columns. For example if they are separated by a '|':

In [125]: s = pd.Series(["a", "a|b", np.nan, "a|c"], dtype="str")

In [126]: s.str.get_dummies(sep="|")
Out[126]:
   a  b  c
0  1  0  0
1  1  1  0
2  0  0  0
3  1  0  1

String Index also supports get_dummies which returns a MultiIndex.

In [127]: idx = pd.Index(["a", "a|b", np.nan, "a|c"])

In [128]: idx.str.get_dummies(sep="|")
Out[128]:
MultiIndex([(1, 0, 0),
            (1, 1, 0),
            (0, 0, 0),
            (1, 0, 1)],
           names=['a', 'b', 'c'])

See also get_dummies().

Behavior differences#

Differences in behavior will be primarily due to the kind of NA value.

StringDtype with np.nan NA values#

  1. Like dtype="object", string accessor methods that return integer output will return a NumPy array that is either dtype int or float depending on the presence of NA values. Methods returning boolean output will return a NumPy array that is dtype bool, with the value False when an NA value is encountered.

In [129]: s = pd.Series(["a", None, "b"], dtype="str")

In [130]: s
Out[130]:
0      a
1    NaN
2      b
dtype: str

In [131]: s.str.count("a")
Out[131]:
0    1.0
1    NaN
2    0.0
dtype: float64

In [132]: s.dropna().str.count("a")
Out[132]:
0    1
2    0
dtype: int64

When NA values are present, the output dtype is float64. However, boolean output results in False for the NA values.

In [133]: s.str.isdigit()
Out[133]:
0    False
1    False
2    False
dtype: bool

In [134]: s.str.match("a")
Out[134]:
0     True
1    False
2    False
dtype: bool

  2. Some string methods, like Series.str.decode(), are not available because the underlying array can only contain strings, not bytes.
  3. Comparison operations will return a NumPy array with dtype bool. Missing values will always compare as unequal, just as np.nan does. See the sketch after this list.
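A short sketch of the comparison behavior for this variant, reusing the s defined just above (dtype "str"); the commented output is what the rules above imply:

s == "a"
# 0     True
# 1    False   # the missing value compares unequal, like np.nan
# 2    False
# dtype: bool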

StringDtype with pd.NA NA values#

  1. String accessor methods that return integer output will always return a nullable integer dtype, rather than either int or float dtype (depending on the presence of NA values). Methods returning boolean output will return a nullable boolean dtype.

In [135]: s = pd.Series(["a", None, "b"], dtype="string")

In [136]: s
Out[136]:
0       a
1    <NA>
2       b
dtype: string

In [137]: s.str.count("a")
Out[137]:
0       1
1    <NA>
2       0
dtype: Int64

In [138]: s.dropna().str.count("a")
Out[138]:
0    1
2    0
dtype: Int64

Both outputs are Int64 dtype. Similarly for methods returning boolean values.

In [139]: s.str.isdigit()
Out[139]:
0    False
1     <NA>
2    False
dtype: boolean

In [140]: s.str.match("a")
Out[140]:
0     True
1     <NA>
2    False
dtype: boolean

  2. Some string methods, like Series.str.decode(), are not available because the underlying array can only contain strings, not bytes.
  3. Comparison operations will return an object with BooleanDtype, rather than a bool dtype object. Missing values will propagate in comparison operations, rather than always comparing unequal like numpy.nan. See the sketch after this list.
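A short sketch of the comparison behavior for this variant, reusing the s defined just above (dtype "string"); the commented output is what the rules above imply:

s == "a"
# 0     True
# 1     <NA>   # the missing value propagates instead of comparing unequal
# 2    False
# dtype: boolean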

Important

Everything else that follows in the rest of this document applies equally to 'str', 'string', and object dtype.

The four StringDtype variants#

There are four StringDtype variants that are available to users.

Python storage with np.nan values#

Note

This is the same as dtype='str' when PyArrow is not installed.

The implementation uses a NumPy object array, which directly stores the Python string objects, hence why the storage here is called 'python'. NA values in this array are represented and behave as np.nan.

In [141]: pd.Series(
   .....:     ["a", "b", None, np.nan, pd.NA],
   .....:     dtype=pd.StringDtype(storage="python", na_value=np.nan)
   .....: )
Out[141]:
0      a
1      b
2    NaN
3    NaN
4    NaN
dtype: str

Notice that the last three values are all inferred by pandas as being NA values, and hence stored as np.nan.

PyArrow storage with np.nan values#

Note

This is the same as dtype='str' when PyArrow is installed.

The implementation uses a PyArrow array, however NA values in this array are represented and behave as np.nan.

In [142]: pd.Series(
   .....:     ["a", "b", None, np.nan, pd.NA],
   .....:     dtype=pd.StringDtype(storage="pyarrow", na_value=np.nan)
   .....: )
Out[142]:
0      a
1      b
2    NaN
3    NaN
4    NaN
dtype: str

Notice that the last three values are all inferred by pandas as being NA values, and hence stored as np.nan.

Python storage with pd.NA values#

Note

This is the same as dtype='string' when PyArrow is not installed.

The implementation uses a NumPy object array, which directly stores the Python string objects, hence why the storage here is called 'python'. NA values in this array are represented and behave as pd.NA.

In [143]: pd.Series(
   .....:     ["a", "b", None, np.nan, pd.NA],
   .....:     dtype=pd.StringDtype(storage="python", na_value=pd.NA)
   .....: )
Out[143]:
0       a
1       b
2    <NA>
3    <NA>
4    <NA>
dtype: string

Notice that the last three values are all inferred by pandas as being NA values, and hence stored as pd.NA.

PyArrow storage with pd.NA values#

Note

This is the same as dtype='string' when PyArrow is installed.

The implementation uses a PyArrow array. NA values in this array are represented and behave as pd.NA.

In [144]: pd.Series(
   .....:     ["a", "b", None, np.nan, pd.NA],
   .....:     dtype=pd.StringDtype(storage="pyarrow", na_value=pd.NA)
   .....: )
Out[144]:
0       a
1       b
2    <NA>
3    <NA>
4    <NA>
dtype: string

Notice that the last three values are all inferred by pandas as being NA values, and hence stored as pd.NA.

Method summary#