pandas.read_sql_query — pandas 2.2.3 documentation

pandas.read_sql_query(sql, con, index_col=None, coerce_float=True, params=None, parse_dates=None, chunksize=None, dtype=None, dtype_backend=<no_default>)[source]#

Read SQL query into a DataFrame.

Returns a DataFrame corresponding to the result set of the query string. Optionally provide an index_col parameter to use one of the columns as the index; otherwise the default integer index will be used.
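As a quick illustration, here is a minimal sketch (using an in-memory SQLite database and a hypothetical people table) contrasting the default integer index with index_col:

import sqlite3
import pandas as pd

# Hypothetical table, created only for illustration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?)", [(1, "Alice"), (2, "Bob")])

df = pd.read_sql_query("SELECT id, name FROM people", conn)
# df gets a default RangeIndex; passing index_col promotes a column instead
df_by_id = pd.read_sql_query("SELECT id, name FROM people", conn, index_col="id")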

Parameters:

sql : str SQL query or SQLAlchemy Selectable (select or text object)

SQL query to be executed.

con : SQLAlchemy connectable, str, or sqlite3 connection

Using SQLAlchemy makes it possible to use any DB supported by that library. If a DBAPI2 object, only sqlite3 is supported.

index_col : str or list of str, optional, default: None

Column(s) to set as index (MultiIndex).

coerce_float : bool, default True

Attempts to convert values of non-string, non-numeric objects (like decimal.Decimal) to floating point. Useful for SQL result sets.

params : list, tuple or mapping, optional, default: None

List of parameters to pass to the execute method. The syntax used to pass parameters is database driver dependent. Check your database driver documentation for which of the five syntax styles, described in PEP 249's paramstyle, is supported. E.g. psycopg2 uses %(name)s, so pass params={'name': 'value'}.
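For instance, a minimal sketch with sqlite3 (whose paramstyle is qmark, so placeholders are written as ?; the users table is hypothetical):

import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [("Alice", 30), ("Bob", 25)])

# With the qmark style, params is a sequence matched positionally to the "?"s
df = pd.read_sql_query(
    "SELECT name, age FROM users WHERE age >= ?",
    conn,
    params=(26,),
)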

parse_dates : list or dict, default: None

- List of column names to parse as dates.
- Dict of {column_name: format string} where format string is strftime compatible in case of parsing string times, or is one of (D, s, ns, ms, us) in case of parsing integer timestamps.
- Dict of {column_name: arg dict}, where the arg dict corresponds to the keyword arguments of pandas.to_datetime(). Especially useful with databases without native Datetime support, such as SQLite.
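A minimal sketch of the format-string form, assuming a hypothetical events table (SQLite stores timestamps as plain text, so parsing is needed):

import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (name TEXT, created TEXT)")
conn.execute("INSERT INTO events VALUES ('launch', '2024-01-05')")

# Parse the text column "created" into datetime64[ns] with an explicit format
df = pd.read_sql_query(
    "SELECT * FROM events",
    conn,
    parse_dates={"created": "%Y-%m-%d"},
)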

chunksize : int, default None

If specified, return an iterator where chunksize is the number of rows to include in each chunk.
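A minimal sketch that streams a hypothetical measurements table in chunks of four rows:

import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (value REAL)")
conn.executemany("INSERT INTO measurements VALUES (?)", [(float(x),) for x in range(10)])

# With chunksize set, read_sql_query returns an iterator of DataFrames
for chunk in pd.read_sql_query("SELECT value FROM measurements", conn, chunksize=4):
    print(len(chunk))  # prints 4, 4, 2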

dtype : Type name or dict of columns

Data type for data or columns. E.g. np.float64 or {'a': np.float64, 'b': np.int32, 'c': 'Int64'}.

Added in version 1.3.0.
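A minimal sketch, assuming a hypothetical scores table, that requests explicit per-column dtypes (including the nullable 'Int64' for a column containing NULL):

import sqlite3
import numpy as np
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE scores (a REAL, b INTEGER, c INTEGER)")
conn.execute("INSERT INTO scores VALUES (1.5, 2, NULL)")

# Cast columns on read; "Int64" keeps the NULL in column c as pd.NA
df = pd.read_sql_query(
    "SELECT * FROM scores",
    conn,
    dtype={"a": np.float64, "b": np.int32, "c": "Int64"},
)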

dtype_backend : {'numpy_nullable', 'pyarrow'}, default 'numpy_nullable'

Back-end data type applied to the resultant DataFrame (still experimental). Behaviour is as follows:

- "numpy_nullable": returns a nullable-dtype-backed DataFrame (default).
- "pyarrow": returns a pyarrow-backed nullable ArrowDtype DataFrame.

Added in version 2.0.
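A minimal sketch, assuming a hypothetical items table; the 'pyarrow' variant additionally requires the pyarrow package to be installed:

import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (qty INTEGER, price REAL)")
conn.executemany("INSERT INTO items VALUES (?, ?)", [(3, 9.99), (None, None)])

# Nullable, NumPy-backed dtypes (e.g. Int64, Float64) instead of float64 + NaN
df = pd.read_sql_query("SELECT * FROM items", conn, dtype_backend="numpy_nullable")

# Arrow-backed ArrowDtype columns (needs pyarrow installed)
# df_arrow = pd.read_sql_query("SELECT * FROM items", conn, dtype_backend="pyarrow")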

Returns:

DataFrame or Iterator[DataFrame]

See also

read_sql_table

Read SQL database table into a DataFrame.

read_sql

Read SQL query or database table into a DataFrame.

Notes

Any datetime values with time zone information parsed via the parse_dates parameter will be converted to UTC.

Examples

import pandas as pd
from sqlalchemy import create_engine

# Assumes database.db contains a table named "data"
engine = create_engine("sqlite:///database.db")
with engine.connect() as conn:
    data = pd.read_sql_query("SELECT * FROM data", conn)
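For comparison, a sketch of the same read over a plain sqlite3 connection (again assuming database.db exists and contains a table named data):

import sqlite3
import pandas as pd

conn = sqlite3.connect("database.db")
data = pd.read_sql_query("SELECT * FROM data", conn)
conn.close()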