What’s new in 0.24.0 (January 25, 2019)

Warning

The 0.24.x series of releases will be the last to support Python 2. Future feature releases will support Python 3 only. See Plan for dropping Python 2.7 for more details.

This is a major release from 0.23.4 and includes a number of API changes, new features, enhancements, and performance improvements along with a large number of bug fixes.

Highlights include:

  • Optional integer NA support
  • Accessing the values in a Series or Index with .array and .to_numpy()
  • pandas.array, a new top-level method for creating arrays
  • Storing Interval and Period data in Series and DataFrame
  • Joining with two multi-indexes

Check the API Changes and deprecations before updating.

These are the changes in pandas 0.24.0. See Release Notes for a full changelog including other versions of pandas.

Enhancements

Optional integer NA support

Pandas has gained the ability to hold integer dtypes with missing values. This long-requested feature is enabled through the use of extension types.

Note

IntegerArray is currently experimental. Its API or implementation may change without warning.

We can construct a Series with the specified dtype. The dtype string Int64 is a pandas ExtensionDtype. Specifying a list or array containing the traditional missing value marker np.nan will still infer to integer dtype. The display of the Series will also use NaN to indicate missing values in string outputs. (GH20700, GH20747, GH22441, GH21789, GH22346)

In [1]: s = pd.Series([1, 2, np.nan], dtype='Int64')

In [2]: s
Out[2]: 
0      1
1      2
2    NaN
Length: 3, dtype: Int64

Operations on these dtypes will propagate NaN, as in other pandas operations.

# arithmetic
In [3]: s + 1
Out[3]: 
0      2
1      3
2    NaN
Length: 3, dtype: Int64

# comparison
In [4]: s == 1
Out[4]: 
0     True
1    False
2    False
Length: 3, dtype: bool

# indexing
In [5]: s.iloc[1:3]
Out[5]: 
1      2
2    NaN
Length: 2, dtype: Int64

# operate with other dtypes
In [6]: s + s.iloc[1:3].astype('Int8')
Out[6]: 
0    NaN
1      4
2    NaN
Length: 3, dtype: Int64

# coerce when needed
In [7]: s + 0.01
Out[7]: 
0    1.01
1    2.01
2     NaN
Length: 3, dtype: float64

These dtypes can operate as part of a DataFrame.

In [8]: df = pd.DataFrame({'A': s, 'B': [1, 1, 3], 'C': list('aab')})

In [9]: df
Out[9]: 
     A  B  C
0    1  1  a
1    2  1  a
2  NaN  3  b

[3 rows x 3 columns]

In [10]: df.dtypes
Out[10]: 
A     Int64
B     int64
C    object
Length: 3, dtype: object

These dtypes can be merged, reshaped, and cast.

In [11]: pd.concat([df[['A']], df[['B', 'C']]], axis=1).dtypes
Out[11]: 
A     Int64
B     int64
C    object
Length: 3, dtype: object

In [12]: df['A'].astype(float)
Out[12]: 
0    1.0
1    2.0
2    NaN
Name: A, Length: 3, dtype: float64

Reduction and groupby operations such as sum work.

In [13]: df.sum()
Out[13]: 
A      3
B      5
C    aab
Length: 3, dtype: object

In [14]: df.groupby('B').A.sum()
Out[14]: 
B
1    3
3    0
Name: A, Length: 2, dtype: Int64

Warning

The Integer NA support currently uses the capitalized dtype version, e.g. Int8 as compared to the traditional int8. This may be changed at a future date.
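
As a minimal sketch of the distinction (assuming pandas 0.24; variable names here are illustrative only):

import numpy as np
import pandas as pd

# Capitalized 'Int8' is the nullable extension dtype: it can hold missing values.
nullable = pd.Series([1, 2, np.nan], dtype='Int8')

# Lowercase 'int8' is the traditional NumPy dtype: it cannot hold NaN,
# and [1, 2, np.nan] would be coerced to float64 instead.
traditional = pd.Series([1, 2], dtype='int8')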

See Nullable integer data type for more.

Accessing the values in a Series or Index

Series.array and Index.array have been added for extracting the array backing a Series or Index. (GH19954, GH23623)

In [15]: idx = pd.period_range('2000', periods=4)

In [16]: idx.array
Out[16]: 
<PeriodArray>
['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04']
Length: 4, dtype: period[D]

In [17]: pd.Series(idx).array
Out[17]: 
<PeriodArray>
['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04']
Length: 4, dtype: period[D]

Historically, this would have been done with series.values, but with .values it was unclear whether the returned value would be the actual array, some transformation of it, or one of pandas' custom arrays (like Categorical). For example, with PeriodIndex, .values generates a new ndarray of period objects each time.

In [18]: idx.values
Out[18]: 
array([Period('2000-01-01', 'D'), Period('2000-01-02', 'D'),
       Period('2000-01-03', 'D'), Period('2000-01-04', 'D')], dtype=object)

In [19]: id(idx.values)
Out[19]: 140075437660400

In [20]: id(idx.values)
Out[20]: 140075029063200

If you need an actual NumPy array, use Series.to_numpy() or Index.to_numpy().

In [21]: idx.to_numpy()
Out[21]: 
array([Period('2000-01-01', 'D'), Period('2000-01-02', 'D'),
       Period('2000-01-03', 'D'), Period('2000-01-04', 'D')], dtype=object)

In [22]: pd.Series(idx).to_numpy()
Out[22]: 
array([Period('2000-01-01', 'D'), Period('2000-01-02', 'D'),
       Period('2000-01-03', 'D'), Period('2000-01-04', 'D')], dtype=object)

For Series and Indexes backed by normal NumPy arrays, Series.array will return a new arrays.PandasArray, which is a thin (no-copy) wrapper around a numpy.ndarray. PandasArray isn’t especially useful on its own, but it does provide the same interface as any extension array defined in pandas or by a third-party library.

In [23]: ser = pd.Series([1, 2, 3])

In [24]: ser.array
Out[24]: 
<PandasArray>
[1, 2, 3]
Length: 3, dtype: int64

In [25]: ser.to_numpy()
Out[25]: array([1, 2, 3])

We haven’t removed or deprecated Series.values or DataFrame.values, but we highly recommend using .array or .to_numpy() instead.

See Dtypes and Attributes and Underlying Data for more.

pandas.array: a new top-level method for creating arrays

A new top-level method array() has been added for creating 1-dimensional arrays (GH22860). This can be used to create any extension array, including extension arrays registered by 3rd party libraries. See the dtypes docs for more on extension arrays.

In [26]: pd.array([1, 2, np.nan], dtype='Int64')
Out[26]: 
<IntegerArray>
[1, 2, NaN]
Length: 3, dtype: Int64

In [27]: pd.array(['a', 'b', 'c'], dtype='category')
Out[27]: 
[a, b, c]
Categories (3, object): [a, b, c]

Passing data for which there isn’t a dedicated extension type (e.g. float, integer, etc.) will return a new arrays.PandasArray, which is just a thin (no-copy) wrapper around a numpy.ndarray that satisfies the pandas extension array interface.

In [28]: pd.array([1, 2, 3])
Out[28]: 
<PandasArray>
[1, 2, 3]
Length: 3, dtype: int64

On its own, a PandasArray isn’t a very useful object. But if you need to write low-level code that works generically for any ExtensionArray, PandasArray satisfies that need.
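
For example, a hypothetical helper (first_valid is not part of pandas) can rely only on the shared ExtensionArray interface and therefore work for a PandasArray and an IntegerArray alike:

import numpy as np
import pandas as pd

def first_valid(arr):
    # isna() and element iteration are part of the common
    # ExtensionArray interface, so this works for any extension array.
    for value, missing in zip(arr, arr.isna()):
        if not missing:
            return value
    return None

first_valid(pd.array([1, 2, 3]))                   # 1, backed by PandasArray
first_valid(pd.array([np.nan, 2], dtype='Int64'))  # 2, backed by IntegerArray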

Notice that by default, if no dtype is specified, the dtype of the returned array is inferred from the data. In particular, note that the first example of [1, 2, np.nan] would have returned a floating-point array, since NaN is a float.

In [29]: pd.array([1, 2, np.nan])
Out[29]: 
<PandasArray>
[1.0, 2.0, nan]
Length: 3, dtype: float64

Storing Interval and Period data in Series and DataFrame

Interval and Period data may now be stored in a Series or DataFrame, in addition to an IntervalIndex and PeriodIndex as before (GH19453, GH22862).

In [30]: ser = pd.Series(pd.interval_range(0, 5))

In [31]: ser
Out[31]: 
0    (0, 1]
1    (1, 2]
2    (2, 3]
3    (3, 4]
4    (4, 5]
Length: 5, dtype: interval

In [32]: ser.dtype
Out[32]: interval[int64]

For periods:

In [33]: pser = pd.Series(pd.period_range("2000", freq="D", periods=5))

In [34]: pser
Out[34]: 
0    2000-01-01
1    2000-01-02
2    2000-01-03
3    2000-01-04
4    2000-01-05
Length: 5, dtype: period[D]

In [35]: pser.dtype
Out[35]: period[D]

Previously, these would be cast to a NumPy array with object dtype. In general, this should result in better performance when storing an array of intervals or periods in a Series or column of a DataFrame.

Use Series.array to extract the underlying array of intervals or periods from the Series:

In [36]: ser.array
Out[36]: 
IntervalArray([(0, 1], (1, 2], (2, 3], (3, 4], (4, 5]],
              closed='right',
              dtype='interval[int64]')

In [37]: pser.array
Out[37]: 
<PeriodArray>
['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04', '2000-01-05']
Length: 5, dtype: period[D]

These return an instance of arrays.IntervalArray or arrays.PeriodArray, the new extension arrays that back interval and period data.

Warning

For backwards compatibility, Series.values continues to return a NumPy array of objects for Interval and Period data. We recommend using Series.array when you need the array of data stored in the Series, and Series.to_numpy() when you know you need a NumPy array.

See Dtypes and Attributes and Underlying Data for more.

Joining with two multi-indexes

DataFrame.merge() and DataFrame.join() can now be used to join multi-indexed DataFrame instances on the overlapping index levels (GH6360)

See the Merge, join, and concatenate documentation section.

In [38]: index_left = pd.MultiIndex.from_tuples([('K0', 'X0'), ('K0', 'X1'),
   ....:                                        ('K1', 'X2')],
   ....:                                        names=['key', 'X'])
   ....: 

In [39]: left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
   ....:                      'B': ['B0', 'B1', 'B2']}, index=index_left)
   ....: 

In [40]: index_right = pd.MultiIndex.from_tuples([('K0', 'Y0'), ('K1', 'Y1'),
   ....:                                         ('K2', 'Y2'), ('K2', 'Y3')],
   ....:                                         names=['key', 'Y'])
   ....: 

In [41]: right = pd.DataFrame({'C': ['C0', 'C1', 'C2', 'C3'],
   ....:                       'D': ['D0', 'D1', 'D2', 'D3']}, index=index_right)
   ....: 

In [42]: left.join(right)
Out[42]: 
            A   B   C   D
key X  Y                 
K0  X0 Y0  A0  B0  C0  D0
    X1 Y0  A1  B1  C0  D0
K1  X2 Y1  A2  B2  C1  D1

[3 rows x 4 columns]

For earlier versions this can be done using the following.

In [43]: pd.merge(left.reset_index(), right.reset_index(),
   ....:          on=['key'], how='inner').set_index(['key', 'X', 'Y'])
   ....: 
Out[43]: 
            A   B   C   D
key X  Y                 
K0  X0 Y0  A0  B0  C0  D0
    X1 Y0  A1  B1  C0  D0
K1  X2 Y1  A2  B2  C1  D1

[3 rows x 4 columns]

read_html Enhancements

read_html() previously ignored colspan and rowspan attributes. Now it understands them, treating them as sequences of cells with the same value. (GH17054)

In [44]: result = pd.read_html("""
   ....:   <table>
   ....:     <thead>
   ....:       <tr>
   ....:         <th>A</th><th>B</th><th>C</th>
   ....:       </tr>
   ....:     </thead>
   ....:     <tbody>
   ....:       <tr>
   ....:         <td colspan="2">1</td><td>2</td>
   ....:       </tr>
   ....:     </tbody>
   ....:   </table>""")
   ....:

Previous behavior:

In [13]: result
Out [13]:
[   A  B   C
 0  1  2 NaN]

New behavior:

In [45]: result
Out[45]: 
[   A  B  C
 0  1  1  2
 
 [1 rows x 3 columns]]

New Styler.pipe() method

The Styler class has gained a pipe() method. This provides a convenient way to apply users’ predefined styling functions, and can help reduce “boilerplate” when using DataFrame styling functionality repeatedly within a notebook. (GH23229)

In [46]: df = pd.DataFrame({'N': [1250, 1500, 1750], 'X': [0.25, 0.35, 0.50]})

In [47]: def format_and_align(styler):
   ....:     return (styler.format({'N': '{:,}', 'X': '{:.1%}'})
   ....:                   .set_properties(**{'text-align': 'right'}))
   ....: 

In [48]: df.style.pipe(format_and_align).set_caption('Summary of results.')
Out[48]: <pandas.io.formats.style.Styler at 0x7f660791f748>

Similar methods already exist for other classes in pandas, including DataFrame.pipe(), GroupBy.pipe(), and Resampler.pipe().
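
DataFrame.pipe() composes the same way; here add_total is a hypothetical user-defined helper, not a pandas method:

import pandas as pd

def add_total(df, column):
    # Return a copy with a grand-total row appended for one column.
    out = df.copy()
    out.loc['total', column] = df[column].sum()
    return out

df = pd.DataFrame({'N': [1250, 1500, 1750]})
df.pipe(add_total, column='N')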

Renaming names in a MultiIndex

DataFrame.rename_axis() now supports index and columns arguments and Series.rename_axis() supports the index argument (GH19978).

This change allows a dictionary to be passed so that some of the names of a MultiIndex can be changed.

Example:

In [49]: mi = pd.MultiIndex.from_product([list('AB'), list('CD'), list('EF')],
   ....:                                 names=['AB', 'CD', 'EF'])
   ....: 

In [50]: df = pd.DataFrame([i for i in range(len(mi))], index=mi, columns=['N'])

In [51]: df
Out[51]: 
          N
AB CD EF   
A  C  E   0
      F   1
   D  E   2
      F   3
B  C  E   4
      F   5
   D  E   6
      F   7

[8 rows x 1 columns]

In [52]: df.rename_axis(index={'CD': 'New'})
Out[52]: 
           N
AB New EF   
A  C   E   0
       F   1
   D   E   2
       F   3
B  C   E   4
       F   5
   D   E   6
       F   7

[8 rows x 1 columns]

See the Advanced documentation on renaming for more details.

Other enhancements

Backwards incompatible API changes

Pandas 0.24.0 includes a number of API breaking changes.

Increased minimum versions for dependencies

We have updated our minimum supported versions of dependencies (GH21242, GH18742, GH23774, GH24767). If installed, we now require:

Package        Minimum Version  Required
numpy          1.12.0           X
bottleneck     1.2.0
fastparquet    0.2.1
matplotlib     2.0.0
numexpr        2.6.1
pandas-gbq     0.8.0
pyarrow        0.9.0
pytables       3.4.2
scipy          0.18.1
xlrd           1.0.0
pytest (dev)   3.6

Additionally we no longer depend on feather-format for feather-based storage and replaced it with references to pyarrow (GH21639 and GH23053).
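
Feather round trips consequently go through pyarrow; a minimal sketch (assuming pyarrow >= 0.9.0 is installed and the working directory is writable):

import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3]})
df.to_feather('example.feather')   # serialized via pyarrow
pd.read_feather('example.feather')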

os.linesep is used for line_terminator of DataFrame.to_csv

DataFrame.to_csv() now uses os.linesep rather than '\n' for the default line terminator (GH20353). This change only affects behavior when running on Windows, where '\r\n' was used as the line terminator even when '\n' was passed in line_terminator.

Previous behavior on Windows:

In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
   ...:                      "string_with_crlf": ["a\r\nbc"]})

In [2]: # When passing file PATH to to_csv,
   ...: # line_terminator does not work, and csv is saved with '\r\n'.
   ...: # Also, this converts all '\n's in the data to '\r\n'.
   ...: data.to_csv("test.csv", index=False, line_terminator='\n')

In [3]: with open("test.csv", mode='rb') as f:
   ...:     print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\r\n"a\r\nbc","a\r\r\nbc"\r\n'

In [4]: # When passing file OBJECT with newline option to
   ...: # to_csv, line_terminator works.
   ...: with open("test2.csv", mode='w', newline='\n') as f:
   ...:     data.to_csv(f, index=False, line_terminator='\n')

In [5]: with open("test2.csv", mode='rb') as f:
   ...:     print(f.read())
Out[5]: b'string_with_lf,string_with_crlf\n"a\nbc","a\r\nbc"\n'

New behavior on Windows:

Passing line_terminator explicitly sets the line terminator to that character.

In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
   ...:                      "string_with_crlf": ["a\r\nbc"]})

In [2]: data.to_csv("test.csv", index=False, line_terminator='\n')

In [3]: with open("test.csv", mode='rb') as f:
   ...:     print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\n"a\nbc","a\r\nbc"\n'

On Windows, the value of os.linesep is '\r\n', so if line_terminator is not set, '\r\n' is used as the line terminator.

In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
   ...:                      "string_with_crlf": ["a\r\nbc"]})

In [2]: data.to_csv("test.csv", index=False)

In [3]: with open("test.csv", mode='rb') as f:
   ...:     print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'

For file objects, specifying newline is not sufficient to set the line terminator. You must pass in the line_terminator explicitly, even in this case.

In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
   ...:                      "string_with_crlf": ["a\r\nbc"]})

In [2]: with open("test2.csv", mode='w', newline='\n') as f:
   ...:     data.to_csv(f, index=False)

In [3]: with open("test2.csv", mode='rb') as f:
   ...:     print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'

Proper handling of np.NaN in a string data-typed column with the Python engine

There was a bug in read_excel() and read_csv() with the Python engine, where missing values turned into 'nan' with dtype=str and na_filter=True. Now, these missing values are converted to the missing value indicator, np.nan. (GH20377)

Previous behavior:

In [5]: data = 'a,b,c\n1,,3\n4,5,6'
In [6]: df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)
In [7]: df.loc[0, 'b']
Out[7]:
'nan'

New behavior:

In [53]: data = 'a,b,c\n1,,3\n4,5,6'

In [54]: df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)

In [55]: df.loc[0, 'b']
Out[55]: nan

Notice that we now output np.nan itself instead of a stringified form of it.

Parsing datetime strings with timezone offsets

Previously, parsing datetime strings with UTC offsets with to_datetime() or DatetimeIndex would automatically convert the datetime to UTC without timezone localization. This was inconsistent with parsing the same datetime string with Timestamp, which would preserve the UTC offset in the tz attribute. Now, to_datetime() preserves the UTC offset in the tz attribute when all the datetime strings have the same UTC offset (GH17697, GH11736, GH22457)

Previous behavior:

In [2]: pd.to_datetime("2015-11-18 15:30:00+05:30")
Out[2]: Timestamp('2015-11-18 10:00:00')

In [3]: pd.Timestamp("2015-11-18 15:30:00+05:30")
Out[3]: Timestamp('2015-11-18 15:30:00+0530', tz='pytz.FixedOffset(330)')

# Different UTC offsets would automatically convert the datetimes to UTC (without a UTC timezone)
In [4]: pd.to_datetime(["2015-11-18 15:30:00+05:30", "2015-11-18 16:30:00+06:30"])
Out[4]: DatetimeIndex(['2015-11-18 10:00:00', '2015-11-18 10:00:00'], dtype='datetime64[ns]', freq=None)

New behavior:

In [56]: pd.to_datetime("2015-11-18 15:30:00+05:30")
Out[56]: Timestamp('2015-11-18 15:30:00+0530', tz='pytz.FixedOffset(330)')

In [57]: pd.Timestamp("2015-11-18 15:30:00+05:30")
Out[57]: Timestamp('2015-11-18 15:30:00+0530', tz='pytz.FixedOffset(330)')

Parsing datetime strings with the same UTC offset will preserve the UTC offset in the tz attribute:

In [58]: pd.to_datetime(["2015-11-18 15:30:00+05:30"] * 2)
Out[58]: DatetimeIndex(['2015-11-18 15:30:00+05:30', '2015-11-18 15:30:00+05:30'], dtype='datetime64[ns, pytz.FixedOffset(330)]', freq=None)

Parsing datetime strings with different UTC offsets will now create an Index of datetime.datetime objects with different UTC offsets

In [59]: idx = pd.to_datetime(["2015-11-18 15:30:00+05:30",
   ....:                       "2015-11-18 16:30:00+06:30"])
   ....: 

In [60]: idx
Out[60]: Index([2015-11-18 15:30:00+05:30, 2015-11-18 16:30:00+06:30], dtype='object')

In [61]: idx[0]
Out[61]: datetime.datetime(2015, 11, 18, 15, 30, tzinfo=tzoffset(None, 19800))

In [62]: idx[1]
Out[62]: datetime.datetime(2015, 11, 18, 16, 30, tzinfo=tzoffset(None, 23400))

Passing utc=True will mimic the previous behavior but will correctly indicate that the dates have been converted to UTC

In [63]: pd.to_datetime(["2015-11-18 15:30:00+05:30",
   ....:                 "2015-11-18 16:30:00+06:30"], utc=True)
   ....: 
Out[63]: DatetimeIndex(['2015-11-18 10:00:00+00:00', '2015-11-18 10:00:00+00:00'], dtype='datetime64[ns, UTC]', freq=None)

Parsing mixed-timezones with read_csv()

read_csv() no longer silently converts mixed-timezone columns to UTC (GH24987).

Previous behavior

>>> import io
>>> content = """\
... a
... 2000-01-01T00:00:00+05:00
... 2000-01-01T00:00:00+06:00"""
>>> df = pd.read_csv(io.StringIO(content), parse_dates=['a'])
>>> df.a
0   1999-12-31 19:00:00
1   1999-12-31 18:00:00
Name: a, dtype: datetime64[ns]

New behavior

In [64]: import io

In [65]: content = """\
   ....: a
   ....: 2000-01-01T00:00:00+05:00
   ....: 2000-01-01T00:00:00+06:00"""
   ....: 

In [66]: df = pd.read_csv(io.StringIO(content), parse_dates=['a'])

In [67]: df.a
Out[67]: 
0    2000-01-01 00:00:00+05:00
1    2000-01-01 00:00:00+06:00
Name: a, Length: 2, dtype: object

As can be seen, the dtype is object; each value in the column is a string. To convert the strings to an array of datetimes, use the date_parser argument:

In [68]: df = pd.read_csv(io.StringIO(content), parse_dates=['a'],
   ....:                  date_parser=lambda col: pd.to_datetime(col, utc=True))
   ....: 

In [69]: df.a
Out[69]: 
0   1999-12-31 19:00:00+00:00
1   1999-12-31 18:00:00+00:00
Name: a, Length: 2, dtype: datetime64[ns, UTC]

See Parsing datetime strings with timezone offsets for more.

Time values in dt.end_time and to_timestamp(how='end')

The time values in Period and PeriodIndex objects are now set to ‘23:59:59.999999999’ when calling Series.dt.end_time, Period.end_time, PeriodIndex.end_time, Period.to_timestamp() with how='end', or PeriodIndex.to_timestamp() with how='end' (GH17157)

Previous behavior:

In [2]: p = pd.Period('2017-01-01', 'D')
In [3]: pi = pd.PeriodIndex([p])

In [4]: pd.Series(pi).dt.end_time[0]
Out[4]: Timestamp('2017-01-01 00:00:00')

In [5]: p.end_time
Out[5]: Timestamp('2017-01-01 23:59:59.999999999')

New behavior:

Calling Series.dt.end_time will now result in a time of ‘23:59:59.999999999’, as is the case with Period.end_time, for example:

In [70]: p = pd.Period('2017-01-01', 'D')

In [71]: pi = pd.PeriodIndex([p])

In [72]: pd.Series(pi).dt.end_time[0]
Out[72]: Timestamp('2017-01-01 23:59:59.999999999')

In [73]: p.end_time
Out[73]: Timestamp('2017-01-01 23:59:59.999999999')

Series.unique for Timezone-Aware Data

The return type of Series.unique() for datetime with timezone values has changed from a numpy.ndarray of Timestamp objects to an arrays.DatetimeArray (GH24024).

In [74]: ser = pd.Series([pd.Timestamp('2000', tz='UTC'),
   ....:                  pd.Timestamp('2000', tz='UTC')])
   ....:

Previous behavior:

In [3]: ser.unique()
Out[3]: array([Timestamp('2000-01-01 00:00:00+0000', tz='UTC')], dtype=object)

New behavior:

In [75]: ser.unique()
Out[75]: 
<DatetimeArray>
['2000-01-01 00:00:00+00:00']
Length: 1, dtype: datetime64[ns, UTC]

Sparse data structure refactor

SparseArray, the array backing SparseSeries and the columns in a SparseDataFrame, is now an extension array (GH21978, GH19056, GH22835). To conform to this interface and for consistency with the rest of pandas, some API breaking changes were made:

  • SparseArray is no longer a subclass of numpy.ndarray. To convert a SparseArray to a NumPy array, use numpy.asarray().
  • SparseArray.dtype and SparseSeries.dtype are now instances of SparseDtype, rather than np.dtype. Access the underlying dtype with SparseDtype.subtype.
  • numpy.asarray(sparse_array) now returns a dense array with all the values, not just the non-fill-value values (GH14167)
  • SparseArray.take now matches the API of pandas.api.extensions.ExtensionArray.take() (GH19506); see the sketch after this list:
      • The default value of allow_fill has changed from False to True.
      • The out and mode parameters are no longer accepted (previously, this raised if they were specified).
      • Passing a scalar for indices is no longer allowed.
  • The result of concat() with a mix of sparse and dense Series is a Series with sparse values, rather than a SparseSeries.
  • SparseDataFrame.combine and DataFrame.combine_first no longer support combining a sparse column with a dense column while preserving the sparse subtype. The result will be an object-dtype SparseArray.
  • Setting SparseArray.fill_value to a fill value with a different dtype is now allowed.
  • DataFrame[column] is now a Series with sparse values, rather than a SparseSeries, when slicing a single column with sparse values (GH23559).
  • The result of Series.where() is now a Series with sparse values, like with other extension arrays (GH24077)
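
As a minimal sketch of the new SparseDtype and the ExtensionArray-style take (assuming pandas 0.24; with allow_fill=True an index of -1 marks a slot to be filled with fill_value):

import pandas as pd

arr = pd.SparseArray([1.0, 2.0, 3.0])

arr.dtype          # a SparseDtype: Sparse[float64, nan]
arr.dtype.subtype  # the underlying NumPy dtype: dtype('float64')

# With allow_fill=True, -1 means "missing"; the result is filled
# with fill_value (NaN by default).
arr.take([0, -1, 2], allow_fill=True)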

Some new warnings are issued for operations that require or are likely to materialize a large dense array.

In addition to these API breaking changes, many Performance Improvements and Bug Fixes have been made.

Finally, a Series.sparse accessor was added to provide sparse-specific methods like Series.sparse.from_coo().

In [76]: s = pd.Series([0, 0, 1, 1, 1], dtype='Sparse[int]')

In [77]: s.sparse.density
Out[77]: 0.6

get_dummies() always returns a DataFrame

Previously, when sparse=True was passed to get_dummies(), the return value could be either a DataFrame or a SparseDataFrame, depending on whether all or just a subset of the columns were dummy-encoded. Now, a DataFrame is always returned (GH24284).

Previous behavior

The first get_dummies() returns a DataFrame because the column A is not dummy-encoded. When just ["B", "C"] were passed to get_dummies, all the columns were dummy-encoded, and a SparseDataFrame was returned.

In [2]: df = pd.DataFrame({"A": [1, 2], "B": ['a', 'b'], "C": ['a', 'a']})

In [3]: type(pd.get_dummies(df, sparse=True))
Out[3]: pandas.core.frame.DataFrame

In [4]: type(pd.get_dummies(df[['B', 'C']], sparse=True))
Out[4]: pandas.core.sparse.frame.SparseDataFrame

New behavior

Now, the return type is consistently a DataFrame.

In [78]: type(pd.get_dummies(df, sparse=True))
Out[78]: pandas.core.frame.DataFrame

In [79]: type(pd.get_dummies(df[['B', 'C']], sparse=True))
Out[79]: pandas.core.frame.DataFrame

Note

There’s no difference in memory usage between a SparseDataFrame and a DataFrame with sparse values. The memory usage will be the same as in the previous version of pandas.

Raise ValueError in DataFrame.to_dict(orient='index')

DataFrame.to_dict() now raises a ValueError when used with orient='index' and a non-unique index, instead of losing data (GH22801)

In [80]: df = pd.DataFrame({'a': [1, 2], 'b': [0.5, 0.75]}, index=['A', 'A'])

In [81]: df
Out[81]: 
   a     b
A  1  0.50
A  2  0.75

[2 rows x 2 columns]

In [82]: df.to_dict(orient='index')
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-82-f5309a7c6adb> in <module>
----> 1 df.to_dict(orient='index')

/pandas/pandas/core/frame.py in to_dict(self, orient, into)
   1369         elif orient.lower().startswith("i"):
   1370             if not self.index.is_unique:
-> 1371                 raise ValueError("DataFrame index must be unique for orient='index'.")
   1372             return into_c(
   1373                 (t[0], dict(zip(self.columns, t[1:])))

ValueError: DataFrame index must be unique for orient='index'.

Tick DateOffset normalize restrictions

Creating a Tick object (Day, Hour, Minute, Second, Milli, Micro, Nano) with normalize=True is no longer supported. This prevents unexpected behavior where addition could fail to be monotone or associative. (GH21427)

Previous behavior:

In [2]: ts = pd.Timestamp('2018-06-11 18:01:14')

In [3]: ts
Out[3]: Timestamp('2018-06-11 18:01:14')

In [4]: tic = pd.offsets.Hour(n=2, normalize=True)
   ...:

In [5]: tic
Out[5]: <2 * Hours>

In [6]: ts + tic
Out[6]: Timestamp('2018-06-11 00:00:00')

In [7]: ts + tic + tic + tic == ts + (tic + tic + tic)
Out[7]: False

New behavior:

In [83]: ts = pd.Timestamp('2018-06-11 18:01:14')

In [84]: tic = pd.offsets.Hour(n=2)

In [85]: ts + tic + tic + tic == ts + (tic + tic + tic)
Out[85]: True

Period subtraction

Subtraction of a Period from another Period will give a DateOffset instead of an integer (GH21314)

Previous behavior:

In [2]: june = pd.Period('June 2018')

In [3]: april = pd.Period('April 2018')

In [4]: june - april
Out [4]: 2

New behavior:

In [86]: june = pd.Period('June 2018')

In [87]: april = pd.Period('April 2018')

In [88]: june - april
Out[88]: <2 * MonthEnds>

Similarly, subtraction of a Period from a PeriodIndex will now return an Index of DateOffset objects instead of an Int64Index

Previous behavior:

In [2]: pi = pd.period_range('June 2018', freq='M', periods=3)

In [3]: pi - pi[0]
Out[3]: Int64Index([0, 1, 2], dtype='int64')

New behavior:

In [89]: pi = pd.period_range('June 2018', freq='M', periods=3)

In [90]: pi - pi[0]
Out[90]: Index([<0 * MonthEnds>, <MonthEnd>, <2 * MonthEnds>], dtype='object')

Addition/subtraction of NaN from DataFrame

Adding or subtracting NaN from a DataFrame column with timedelta64[ns] dtype will now raise a TypeError instead of returning all-NaT. This is for compatibility with TimedeltaIndex and Series behavior (GH22163)

In [91]: df = pd.DataFrame([pd.Timedelta(days=1)])

In [92]: df
Out[92]: 
       0
0 1 days

[1 rows x 1 columns]

Previous behavior:

In [4]: df = pd.DataFrame([pd.Timedelta(days=1)])

In [5]: df - np.nan
Out[5]:
    0
0 NaT

New behavior:

In [2]: df - np.nan
...
TypeError: unsupported operand type(s) for -: 'TimedeltaIndex' and 'float'

DataFrame comparison operations broadcasting changes

Previously, the broadcasting behavior of DataFrame comparison operations (==, !=, …) was inconsistent with the behavior of arithmetic operations (+, -, …). The behavior of the comparison operations has been changed to match the arithmetic operations in these cases. (GH22880)

The affected cases are:

In [93]: arr = np.arange(6).reshape(3, 2)

In [94]: df = pd.DataFrame(arr)

In [95]: df
Out[95]: 
   0  1
0  0  1
1  2  3
2  4  5

[3 rows x 2 columns]

Previous behavior:

In [5]: df == arr[[0], :]
    ...: # comparison previously broadcast where arithmetic would raise
Out[5]:
       0      1
0   True   True
1  False  False
2  False  False
In [6]: df + arr[[0], :]
...
ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)

In [7]: df == (1, 2)
    ...: # length matches number of columns;
    ...: # comparison previously raised where arithmetic would broadcast
...
ValueError: Invalid broadcasting comparison [(1, 2)] with block values
In [8]: df + (1, 2)
Out[8]:
   0  1
0  1  3
1  3  5
2  5  7

In [9]: df == (1, 2, 3)
    ...:  # length matches number of rows
    ...:  # comparison previously broadcast where arithmetic would raise
Out[9]:
       0      1
0  False   True
1   True  False
2  False  False
In [10]: df + (1, 2, 3)
...
ValueError: Unable to coerce to Series, length must be 2: given 3

New behavior:

# Comparison operations and arithmetic operations both broadcast.
In [96]: df == arr[[0], :]
Out[96]: 
       0      1
0   True   True
1  False  False
2  False  False

[3 rows x 2 columns]

In [97]: df + arr[[0], :]
Out[97]: 
   0  1
0  0  2
1  2  4
2  4  6

[3 rows x 2 columns]
# Comparison operations and arithmetic operations both broadcast.
In [98]: df == (1, 2)
Out[98]: 
       0      1
0  False  False
1  False  False
2  False  False

[3 rows x 2 columns]

In [99]: df + (1, 2)
Out[99]: 
   0  1
0  1  3
1  3  5
2  5  7

[3 rows x 2 columns]
# Comparison operations and arithmetic operations both raise ValueError.
In [6]: df == (1, 2, 3)
...
ValueError: Unable to coerce to Series, length must be 2: given 3

In [7]: df + (1, 2, 3)
...
ValueError: Unable to coerce to Series, length must be 2: given 3

DataFrame arithmetic operations broadcasting changes

DataFrame arithmetic operations, when operating with 2-dimensional np.ndarray objects, now broadcast in the same way as np.ndarray operations do. (GH23000)

In [100]: arr = np.arange(6).reshape(3, 2)

In [101]: df = pd.DataFrame(arr)

In [102]: df
Out[102]: 
   0  1
0  0  1
1  2  3
2  4  5

[3 rows x 2 columns]

Previous behavior:

In [5]: df + arr[[0], :]   # 1 row, 2 columns
...
ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)
In [6]: df + arr[:, [1]]   # 1 column, 3 rows
...
ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (3, 1)

New behavior:

In [103]: df + arr[[0], :]   # 1 row, 2 columns
Out[103]: 
   0  1
0  0  2
1  2  4
2  4  6

[3 rows x 2 columns]

In [104]: df + arr[:, [1]]   # 1 column, 3 rows
Out[104]: 
   0   1
0  1   2
1  5   6
2  9  10

[3 rows x 2 columns]

Series and Index data-dtype incompatibilities

Series and Index constructors now raise when the data is incompatible with a passed dtype= (GH15832)

Previous behavior:

In [4]: pd.Series([-1], dtype="uint64")
Out [4]:
0    18446744073709551615
dtype: uint64

New behavior:

In [4]: pd.Series([-1], dtype="uint64")
Out [4]:
...
OverflowError: Trying to coerce negative values to unsigned integers

Concatenation Changes

Calling pandas.concat() on a Categorical of ints with NA values now causes them to be processed as objects when concatenating with anything other than another Categorical of ints (GH19214)

In [105]: s = pd.Series([0, 1, np.nan])

In [106]: c = pd.Series([0, 1, np.nan], dtype="category")

Previous behavior

In [3]: pd.concat([s, c])
Out[3]:
0    0.0
1    1.0
2    NaN
0    0.0
1    1.0
2    NaN
dtype: float64

New behavior

In [107]: pd.concat([s, c])
Out[107]: 
0      0
1      1
2    NaN
0      0
1      1
2    NaN
Length: 6, dtype: object

Datetimelike API changes

Other API changes

Extension type changes

Equality and hashability

Pandas now requires that extension dtypes be hashable (i.e. the respective ExtensionDtype objects; hashability is not a requirement for the values of the corresponding ExtensionArray). The base class implements a default __eq__ and __hash__. If you have a parametrized dtype, you should update the ExtensionDtype._metadata tuple to match the signature of your __init__ method. See pandas.api.extensions.ExtensionDtype for more (GH22476).
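
For instance, a parametrized dtype might look like the following sketch (UnitDtype is hypothetical, and a complete dtype would also define type and construct_array_type):

from pandas.api.extensions import ExtensionDtype

class UnitDtype(ExtensionDtype):
    name = "unit"
    # _metadata names the __init__ parameters so the default
    # __eq__ and __hash__ can compare and hash instances.
    _metadata = ("unit",)

    def __init__(self, unit="m"):
        self.unit = unit

UnitDtype("s") == UnitDtype("s")  # True, via the default __eq__
hash(UnitDtype("s"))              # works, via the default __hash__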

New and changed methods

Dtype changes

  • ExtensionDtype has gained the ability to instantiate from string dtypes, e.g. decimal would instantiate a registered DecimalDtype; furthermore the ExtensionDtype has gained the method construct_array_type (GH21185)
  • Added ExtensionDtype._is_numeric for controlling whether an extension dtype is considered numeric (GH22290).
  • Added pandas.api.types.register_extension_dtype() to register an extension type with pandas (GH22664); see the sketch after this list
  • Updated the .type attribute for PeriodDtype, DatetimeTZDtype, and IntervalDtype to be instances of the dtype (Period, Timestamp, and Interval respectively) (GH22938)
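
A registration sketch under the same assumptions (MyUnitDtype is hypothetical; register_extension_dtype is also importable from pandas.api.extensions):

from pandas.api.extensions import ExtensionDtype, register_extension_dtype

# The decorator adds the dtype to pandas' registry so that its string
# name can be looked up; a full dtype would also implement
# construct_from_string, type, and construct_array_type.
@register_extension_dtype
class MyUnitDtype(ExtensionDtype):
    name = "my-unit"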

Operator support

A Series based on an ExtensionArray now supports arithmetic and comparison operators (GH19577). There are two approaches for providing operator support for an ExtensionArray:

  1. Define each of the operators on your ExtensionArray subclass.
  2. Use an operator implementation from pandas that depends on operators that are already defined on the underlying elements (scalars) of the ExtensionArray.

See the ExtensionArray Operator Support documentation section for details on both ways of adding operator support.
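
A sketch of the second approach using the ExtensionScalarOpsMixin that pandas provides for this purpose (HypotheticalArray stands in for a real ExtensionArray subclass whose scalar elements already support the Python operators):

from pandas.api.extensions import ExtensionArray, ExtensionScalarOpsMixin

class HypotheticalArray(ExtensionArray, ExtensionScalarOpsMixin):
    # A real subclass would implement the ExtensionArray interface
    # (_from_sequence, __getitem__, __len__, isna, take, copy, ...).
    pass

# Generate __add__, __sub__, ... and __eq__, __lt__, ... from the
# element-wise scalar operators.
HypotheticalArray._add_arithmetic_ops()
HypotheticalArray._add_comparison_ops()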

Other changes

Bug fixes

Deprecations

Integer addition/subtraction with datetimes and timedeltas is deprecated

In the past, users could, in some cases, add or subtract integers or integer-dtype arrays from Timestamp, DatetimeIndex and TimedeltaIndex.

This usage is now deprecated. Instead add or subtract integer multiples of the object’s freq attribute (GH21939, GH23878).

Previous behavior:

In [5]: ts = pd.Timestamp('1994-05-06 12:15:16', freq=pd.offsets.Hour())
In [6]: ts + 2
Out[6]: Timestamp('1994-05-06 14:15:16', freq='H')

In [7]: tdi = pd.timedelta_range('1D', periods=2)
In [8]: tdi - np.array([2, 1])
Out[8]: TimedeltaIndex(['-1 days', '1 days'], dtype='timedelta64[ns]', freq=None)

In [9]: dti = pd.date_range('2001-01-01', periods=2, freq='7D')
In [10]: dti + pd.Index([1, 2])
Out[10]: DatetimeIndex(['2001-01-08', '2001-01-22'], dtype='datetime64[ns]', freq=None)

New behavior:

In [108]: ts = pd.Timestamp('1994-05-06 12:15:16', freq=pd.offsets.Hour())

In [109]: ts + 2 * ts.freq
Out[109]: Timestamp('1994-05-06 14:15:16', freq='H')

In [110]: tdi = pd.timedelta_range('1D', periods=2)

In [111]: tdi - np.array([2 * tdi.freq, 1 * tdi.freq])
Out[111]: TimedeltaIndex(['-1 days', '1 days'], dtype='timedelta64[ns]', freq=None)

In [112]: dti = pd.date_range('2001-01-01', periods=2, freq='7D')

In [113]: dti + pd.Index([1 * dti.freq, 2 * dti.freq])
Out[113]: DatetimeIndex(['2001-01-08', '2001-01-22'], dtype='datetime64[ns]', freq=None)

Passing integer data and a timezone to DatetimeIndex

The behavior of DatetimeIndex when passed integer data and a timezone is changing in a future version of pandas. Previously, these were interpreted as wall times in the desired timezone. In the future, these will be interpreted as wall times in UTC, which are then converted to the desired timezone (GH24559).

The default behavior remains the same, but issues a warning:

In [3]: pd.DatetimeIndex([946684800000000000], tz="US/Central")
/bin/ipython:1: FutureWarning:
    Passing integer-dtype data and a timezone to DatetimeIndex. Integer values
    will be interpreted differently in a future version of pandas. Previously,
    these were viewed as datetime64[ns] values representing the wall time
    *in the specified timezone*. In the future, these will be viewed as
    datetime64[ns] values representing the wall time *in UTC*. This is similar
    to a nanosecond-precision UNIX epoch. To accept the future behavior, use

        pd.to_datetime(integer_data, utc=True).tz_convert(tz)

    To keep the previous behavior, use

        pd.to_datetime(integer_data).tz_localize(tz)

 Out[3]: DatetimeIndex(['2000-01-01 00:00:00-06:00'], dtype='datetime64[ns, US/Central]', freq=None)

As the warning message explains, opt in to the future behavior by specifying that the integer values are UTC, and then converting to the final timezone:

In [114]: pd.to_datetime([946684800000000000], utc=True).tz_convert('US/Central')
Out[114]: DatetimeIndex(['1999-12-31 18:00:00-06:00'], dtype='datetime64[ns, US/Central]', freq=None)

The old behavior can be retained by localizing directly to the final timezone:

In [115]: pd.to_datetime([946684800000000000]).tz_localize('US/Central')
Out[115]: DatetimeIndex(['2000-01-01 00:00:00-06:00'], dtype='datetime64[ns, US/Central]', freq=None)

Converting timezone-aware Series and Index to NumPy arrays

The conversion from a Series or Index with timezone-aware datetime data will change to preserve timezones by default (GH23569).

NumPy doesn’t have a dedicated dtype for timezone-aware datetimes. In the past, converting a Series or DatetimeIndex with timezone-aware datetimes would convert to a NumPy array by

  1. converting the tz-aware data to UTC
  2. dropping the timezone-info
  3. returning a numpy.ndarray with datetime64[ns] dtype

Future versions of pandas will preserve the timezone information by returning an object-dtype NumPy array where each value is a Timestamp with the correct timezone attached:

In [116]: ser = pd.Series(pd.date_range('2000', periods=2, tz="CET"))

In [117]: ser
Out[117]: 
0   2000-01-01 00:00:00+01:00
1   2000-01-02 00:00:00+01:00
Length: 2, dtype: datetime64[ns, CET]

The default behavior remains the same, but issues a warning:

In [8]: np.asarray(ser)
/bin/ipython:1: FutureWarning: Converting timezone-aware DatetimeArray to timezone-naive
      ndarray with 'datetime64[ns]' dtype. In the future, this will return an ndarray
      with 'object' dtype where each element is a 'pandas.Timestamp' with the correct 'tz'.

        To accept the future behavior, pass 'dtype=object'.
        To keep the old behavior, pass 'dtype="datetime64[ns]"'.
Out[8]:
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
      dtype='datetime64[ns]')

The previous or future behavior can be obtained, without any warnings, by specifying the dtype:

Previous behavior

In [118]: np.asarray(ser, dtype='datetime64[ns]')
Out[118]: 
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
      dtype='datetime64[ns]')

Future behavior

# New behavior
In [119]: np.asarray(ser, dtype=object)
Out[119]: 
array([Timestamp('2000-01-01 00:00:00+0100', tz='CET', freq='D'),
       Timestamp('2000-01-02 00:00:00+0100', tz='CET', freq='D')],
      dtype=object)

Or by using Series.to_numpy():

In [120]: ser.to_numpy()
Out[120]: 
array([Timestamp('2000-01-01 00:00:00+0100', tz='CET', freq='D'),
       Timestamp('2000-01-02 00:00:00+0100', tz='CET', freq='D')],
      dtype=object)

In [121]: ser.to_numpy(dtype="datetime64[ns]")
Out[121]: 
array(['1999-12-31T23:00:00.000000000', '2000-01-01T23:00:00.000000000'],
      dtype='datetime64[ns]')

All the above applies to a DatetimeIndex with tz-aware values as well.

Removal of prior version deprecations/changes

Performance improvements

Bug fixes

Categorical

Datetimelike

Timedelta

Timezones

Offsets

Numeric

Conversion

Strings

Interval

Indexing

Missing

MultiIndex

I/O

Plotting

Groupby/resample/rolling

Reshaping

Sparse

Style

Build changes

Other

  • Bug where C variables were declared with external linkage causing import errors if certain other C libraries were imported before Pandas. (GH24113)

Contributors

A total of 516 people contributed patches to this release. People with a “+” by their names contributed a patch for the first time.

  • 1_x7 +
  • AJ Dyka +
  • AJ Pryor, Ph.D +
  • Aaron Critchley
  • Abdullah İhsan Seçer +
  • Adam Bull +
  • Adam Hooper
  • Adam J. Stewart
  • Adam Kim
  • Adam Klimont +
  • Addison Lynch +
  • Alan Hogue +
  • Albert Villanova del Moral
  • Alex Radu +
  • Alex Rychyk
  • Alex Strick van Linschoten +
  • Alex Volkov +
  • Alex Watt +
  • AlexTereshenkov +
  • Alexander Buchkovsky
  • Alexander Hendorf +
  • Alexander Hess +
  • Alexander Nordin +
  • Alexander Ponomaroff +
  • Alexandre Batisse +
  • Alexandre Decan +
  • Allen Downey +
  • Allison Browne +
  • Aly Sivji
  • Alyssa Fu Ward +
  • Andrew
  • Andrew Gaspari +
  • Andrew Gross +
  • Andrew Spott +
  • Andrew Wood +
  • Andy +
  • Aniket uttam +
  • Anjali2019 +
  • Anjana S +
  • Antoine Viscardi +
  • Antonio Gutierrez +
  • Antti Kaihola +
  • Anudeep Tubati +
  • Arjun Sharma +
  • Armin Varshokar
  • Arno Veenstra +
  • Artem Bogachev
  • ArtinSarraf +
  • Barry Fitzgerald +
  • Bart Aelterman +
  • Batalex +
  • Baurzhan Muftakhidinov
  • Ben James +
  • Ben Nelson +
  • Benjamin Grove +
  • Benjamin Rowell +
  • Benoit Paquet +
  • Bharat Raghunathan +
  • Bhavani Ravi +
  • Big Head +
  • Boris Lau +
  • Brett Naul
  • Brett Randall +
  • Brian Choi +
  • Bryan Cutler +
  • C John Klehm +
  • C.A.M. Gerlach +
  • Caleb Braun +
  • Carl Johan +
  • Cecilia +
  • Chalmer Lowe
  • Chang She
  • Charles David +
  • Cheuk Ting Ho
  • Chris
  • Chris Bertinato +
  • Chris Roberts +
  • Chris Stadler +
  • Christian Haege +
  • Christian Hudon
  • Christopher Whelan
  • Chu Qing Hao +
  • Chuanzhu Xu +
  • Clemens Brunner
  • Da Cheezy Mobsta +
  • Damian Kula +
  • Damini Satya
  • Daniel Himmelstein
  • Daniel Hrisca +
  • Daniel Luis Costa +
  • Daniel Saxton +
  • DanielFEvans +
  • Darcy Meyer +
  • DataOmbudsman
  • David Arcos
  • David Krych
  • David Liu +
  • Dean Langsam +
  • Deepyaman Datta +
  • Denis Belavin +
  • Devin Petersohn +
  • Diane Trout +
  • Diego Argueta +
  • Diego Torres +
  • Dobatymo +
  • Doug Latornell +
  • Dr. Irv
  • Dylan Dmitri Gray +
  • EdAbati +
  • Enrico Rotundo +
  • Eric Boxer +
  • Eric Chea
  • Erik +
  • Erik Nilsson +
  • EternalLearner42 +
  • Evan +
  • Evan Livelo +
  • Fabian Haase +
  • Fabian Retkowski
  • Fabian Rost +
  • Fabien Aulaire +
  • Fakabbir Amin +
  • Fei Phoon +
  • Fernando Margueirat +
  • Flavien Lambert +
  • Florian Müller +
  • Florian Rathgeber +
  • Frank Hoang +
  • Fábio Rosado +
  • Gabe Fernando
  • Gabriel Reid +
  • Gaibo Zhang +
  • Giftlin Rajaiah
  • Gioia Ballin +
  • Giuseppe Romagnuolo +
  • Gjelt
  • Gordon Blackadder +
  • Gosuke Shibahara +
  • Graham Inggs
  • Gregory Rome +
  • Guillaume Gay
  • Guillaume Lemaitre +
  • HHest +
  • Hannah Ferchland
  • Haochen Wu
  • Hielke Walinga +
  • How Si Wei +
  • Hubert +
  • HubertKl +
  • Huize Wang +
  • Hyukjin Kwon +
  • HyunTruth +
  • Iain Barr
  • Ian Dunn +
  • Ignacio Vergara Kausel +
  • Inevitable-Marzipan +
  • Irv Lustig +
  • IsvenC +
  • JElfner +
  • Jacob Bundgaard +
  • Jacopo Rota
  • Jakob Jarmar +
  • James Bourbeau +
  • James Cobon-Kerr +
  • James Myatt +
  • James Winegar +
  • Jan Rudolph
  • Jan-Philip Gehrcke +
  • Jared Groves +
  • Jarrod Millman +
  • Jason Kiley +
  • Javad Noorbakhsh +
  • Jay Offerdahl +
  • Jayanth Katuri +
  • Jeff Reback
  • Jeongmin Yu +
  • Jeremy Schendel
  • Jerod Estapa +
  • Jesper Dramsch +
  • Jiang Yue +
  • Jim Jeon +
  • Joe Jevnik
  • Joel Nothman
  • Joel Ostblom +
  • Johan von Forstner +
  • Johnny Chiu +
  • Jonas +
  • Jonathon Vandezande +
  • Jop Vermeer +
  • Jordi Contestí
  • Jorge López Fueyo +
  • Joris Van den Bossche
  • Jose Quinones +
  • Jose Rivera-Rubio +
  • Josh
  • Josh Friedlander +
  • Jun +
  • Justin Zheng +
  • Kaiqi Dong +
  • Kalyan Gokhale
  • Kane +
  • Kang Yoosam +
  • Kapil Patel +
  • Kara de la Marck +
  • Karl Dunkle Werner +
  • Karmanya Aggarwal +
  • Katherine Surta +
  • Katrin Leinweber +
  • Kendall Masse +
  • Kevin Markham +
  • Kevin Sheppard
  • Kimi Li +
  • Koustav Samaddar +
  • Krishna +
  • Kristian Holsheimer +
  • Ksenia Gueletina +
  • Kyle Kosic +
  • Kyle Prestel +
  • LJ +
  • LeakedMemory +
  • Li Jin +
  • Licht Takeuchi
  • Lorenzo Stella +
  • Luca Donini +
  • Luciano Viola +
  • Maarten Rietbergen +
  • Mak Sze Chun +
  • Marc Garcia
  • Marius Potgieter +
  • Mark Sikora +
  • Markus Meier +
  • Marlene Silva Marchena +
  • Martin Babka +
  • MatanCohe +
  • Mateusz Woś +
  • Mathew Topper +
  • Matias Heikkilä
  • Mats Maiwald +
  • Matt Boggess +
  • Matt Cooper +
  • Matt Williams +
  • Matthew Gilbert
  • Matthew Roeschke
  • Max Bolingbroke +
  • Max Kanter
  • Max Kovalovs +
  • Max van Deursen +
  • Michael
  • Michael Davis +
  • Michael Odintsov
  • Michael P. Moran +
  • Michael Silverstein +
  • Michael-J-Ward +
  • Mickaël Schoentgen +
  • Miguel Sánchez de León Peque +
  • Mike Cramblett +
  • Min ho Kim +
  • Ming Li
  • Misha Veldhoen +
  • Mitar
  • Mitch Negus
  • Monson Shao +
  • Moonsoo Kim +
  • Mortada Mehyar
  • Mukul Ashwath Ram +
  • MusTheDataGuy +
  • Myles Braithwaite
  • Nanda H Krishna +
  • Nehil Jain +
  • Nicholas Musolino +
  • Nicolas Dickreuter +
  • Nikhil Kumar Mengani +
  • Nikoleta Glynatsi +
  • Noam Hershtig +
  • Noora Husseini +
  • Ondrej Kokes
  • Pablo Ambrosio +
  • Pamela Wu +
  • Parfait G +
  • Patrick Park +
  • Paul
  • Paul Ganssle
  • Paul Reidy
  • Paul van Mulbregt +
  • Pauli Virtanen
  • Pav A +
  • Philippe Ombredanne +
  • Phillip Cloud
  • Pietro Battiston
  • Piyush Aggarwal +
  • Prabakaran Kumaresshan +
  • Pulkit Maloo
  • Pyry Kovanen
  • Rajib Mitra +
  • Redonnet Louis +
  • Rhys Parry +
  • Richard Eames +
  • Rick +
  • Robin
  • Roei.r +
  • RomainSa +
  • Roman Imankulov +
  • Roman Yurchak +
  • Ruijing Li +
  • Ryan +
  • Ryan Joyce +
  • Ryan Nazareth +
  • Ryan Rehman +
  • Rüdiger Busche +
  • SEUNG HOON, SHIN +
  • Sakar Panta +
  • Samuel Sinayoko
  • Sandeep Pathak +
  • Sandrine Pataut +
  • Sangwoong Yoon
  • Santosh Kumar +
  • Saurav Chakravorty +
  • Scott McAllister +
  • Scott Talbert +
  • Sean Chan +
  • Sergey Kopylov +
  • Shadi Akiki +
  • Shantanu Gontia +
  • Shengpu Tang +
  • Shirish Kadam +
  • Shivam Rana +
  • Shorokhov Sergey +
  • Simon Hawkins +
  • Simon Riddell +
  • Simone Basso
  • Sinhrks
  • Soyoun(Rose) Kim +
  • Srinivas Reddy Thatiparthy (శ్రీనివాస్ రెడ్డి తాటిపర్తి) +
  • Stefaan Lippens +
  • Stefano Cianciulli
  • Stefano Miccoli +
  • Stephan Hoyer
  • Stephen Childs
  • Stephen Cowley +
  • Stephen Pascoe
  • Stephen Rauch
  • Sterling Paramore +
  • Steve Baker +
  • Steve Cook +
  • Steve Dower +
  • Steven +
  • Stijn Van Hoey
  • Stéphan Taljaard +
  • Sumanau Sareen +
  • Sumin Byeon +
  • Sören +
  • Takuya N +
  • Tamas Nagy +
  • Tan Tran +
  • Tanya Jain +
  • Tao He +
  • Tarbo Fukazawa
  • Terji Petersen +
  • Thein Oo +
  • Thiago Cordeiro da Fonseca +
  • ThibTrip +
  • Thierry Moisan
  • Thijs Damsma +
  • Thiviyan Thanapalasingam +
  • Thomas A Caswell
  • Thomas Kluiters +
  • Thomas Lentali +
  • Tilen Kusterle +
  • Tim D. Smith +
  • Tim Gates +
  • Tim Hoffmann
  • Tim Swast
  • Tom Augspurger
  • Tom Neep +
  • Tomasz Kluczkowski +
  • Tomáš Chvátal +
  • Tony Tao +
  • Triple0 +
  • Troels Nielsen +
  • Tuhin Mahmud +
  • Tyler Reddy +
  • Uddeshya Singh
  • Uwe L. Korn +
  • Vadym Barda +
  • Vaibhav Vishal +
  • Varad Gunjal +
  • Vasily Litvinov +
  • Vibhu Agarwal +
  • Victor Maryama +
  • Victor Villas
  • Vikramjeet Das +
  • Vincent La
  • Vitória Helena +
  • Vladislav +
  • Vu Le
  • Vyom Jain +
  • Víctor Moron Tejero +
  • Weiwen Gu +
  • Wenhuan
  • Wes Turner
  • Wil Tan +
  • Will Ayd +
  • William Ayd
  • Wouter De Coster +
  • Yeojin Kim +
  • Yitzhak Andrade +
  • Yoann Goular +
  • Yuecheng Wu +
  • Yuliya Dovzhenko +
  • Yury Bayda +
  • Zac Hatfield-Dodds +
  • Zach Angell +
  • aberres +
  • aeltanawy +
  • ailchau +
  • alimcmaster1
  • alphaCTzo7G +
  • amphy +
  • anmyachev +
  • araraonline +
  • azure-pipelines[bot] +
  • benarthur91 +
  • bk521234 +
  • cgangwar11 +
  • chris-b1
  • cxl923cc +
  • dahlbaek +
  • danielplawrence +
  • dannyhyunkim +
  • darke-spirits +
  • david-liu-brattle-1
  • davidmvalente +
  • deflatSOCO
  • doosik_bae +
  • dylanchase +
  • eduardo naufel schettino +
  • endenis +
  • enisnazif +
  • euri10 +
  • evangelineliu +
  • ezcitron +
  • fengyqf +
  • fjdiod
  • fjetter
  • fl4p +
  • fleimgruber +
  • froessler
  • gfyoung
  • gwrome +
  • h-vetinari
  • haison +
  • hannah-c +
  • harisbal +
  • heckeop +
  • henriqueribeiro +
  • himanshu awasthi
  • hongshaoyang +
  • iamshwin +
  • igorfassen +
  • jalazbe +
  • jamesoliverh +
  • jbrockmendel
  • jh-wu +
  • jkovacevic +
  • justinchan23 +
  • killerontherun1 +
  • knuu +
  • kpapdac +
  • kpflugshaupt +
  • krsnik93 +
  • leerssej +
  • louispotok
  • lrjball +
  • marcosrullan +
  • mazayo +
  • miker985
  • nathalier +
  • nicolab100 +
  • nprad
  • nrebena +
  • nsuresh +
  • nullptr +
  • ottiP
  • pajachiet +
  • pilkibun +
  • pmaxey83 +
  • raguiar2 +
  • ratijas +
  • rbenes +
  • realead +
  • robbuckley +
  • saurav2608 +
  • shawnbrown +
  • sideeye +
  • ssikdar1
  • sudhir mohanraj +
  • svenharris +
  • syutbai +
  • tadeja +
  • tamuhey +
  • testvinder +
  • thatneat
  • tmnhat2001
  • tomascassidy +
  • tomneep
  • topper-123
  • vkk800 +
  • willweil +
  • winlu +
  • yehia67 +
  • yhaque1213 +
  • ym-pett +
  • yrhooke +
  • ywpark1 +
  • zertrin
  • zhezherun +