pd.read_json() doesn't use utf-8 for a default encoding · Issue #29565 · pandas-dev/pandas

Code Sample, a copy-pastable example if possible

If `locale.getpreferredencoding() != 'UTF-8'`:

```python
import pandas as pd

with open('test.json', 'w', encoding='utf-8') as f:
    f.write('{"A": ["АБВГДабвгд가"]}')

dt2 = pd.read_json('test.json')
print(dt2)
```

If `locale.getpreferredencoding() == 'UTF-8'` (simulate a non-utf-8 locale by mocking it):

```python
import pandas as pd
from unittest import mock

with open('test.json', 'w', encoding='utf-8') as f:
    f.write('{"A": ["АБВГДабвгд가"]}')

with mock.patch('_bootlocale.getpreferredencoding', return_value='cp949'):
    dt2 = pd.read_json('test.json')
print(dt2)
```

Problem description

According to the docs, when the encoding parameter is not given, read_json() uses utf-8 as the default encoding.

However, when read_json() is called without the encoding parameter and given a file path, it opens the file with the built-in open(), and open() determines the encoding from the return value of locale.getpreferredencoding(), which may be something other than utf-8 (my test environment was cp949 on Windows 10/Korean). As a result, the decoded text is wrong on such systems even though the docs promise utf-8.
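Until the default is fixed, the locale-dependent behaviour can be sidestepped by passing the encoding explicitly, or by opening the file yourself with the desired encoding and handing the file object to read_json. A minimal sketch of both workarounds (not the eventual fix in pandas itself):

```python
import pandas as pd

# Write a UTF-8 JSON file containing non-ASCII text.
with open('test.json', 'w', encoding='utf-8') as f:
    f.write('{"A": ["АБВГДабвгд가"]}')

# Workaround 1: pass the encoding explicitly instead of relying on the
# locale-dependent default used by the built-in open().
df = pd.read_json('test.json', encoding='utf-8')

# Workaround 2: open the file with the desired encoding and pass the
# file object, so read_json never calls open() itself.
with open('test.json', 'r', encoding='utf-8') as f:
    df2 = pd.read_json(f)

# Both give the correctly decoded string regardless of the system locale.
assert df.equals(df2)
```

Either way the resulting DataFrame contains the original string `АБВГДабвгд가` intact, independent of `locale.getpreferredencoding()`.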

Expected Output

Output of pd.show_versions()

INSTALLED VERSIONS

commit : None
python : 3.6.8.final.0
python-bits : 64
OS : Linux
OS-release : 4.15.0-66-generic
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8

pandas : 0.25.3
numpy : 1.17.4
pytz : 2019.3
dateutil : 2.8.1
pip : 9.0.1
setuptools : 39.0.1
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : 0.999999999
pymysql : 0.9.3
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None