- Support JDK 11 with Hadoop 2.7
- Spark SQL will respect its own default format (i.e., Parquet) when users run CREATE TABLE without USING or STORED AS clauses
- Enable Parquet nested schema pruning and nested pruning on expressions by default
- Add observable metrics for streaming queries
- Support column pruning through nondeterministic expressions
- Make RecordBinaryComparator check platform endianness when comparing record bytes as longs
- Improve parallelism for the local shuffle reader in adaptive query execution
- Upgrade Apache Arrow to version 0.15.1
- Various interval-related SQL support
- Add a mode to pin each Python thread to a JVM thread
- Provide an option to clean up completed files in a streaming query
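
The RecordBinaryComparator item concerns a comparator that speeds up bytewise comparison by reading records eight bytes at a time as longs: whether the long comparison agrees with unsigned lexicographic byte order depends on the platform's byte order, so the comparator must account for endianness. A minimal pure-Python sketch of the pitfall (illustrative only, not Spark code):

```python
import struct

# Two 8-byte keys; bytewise (lexicographic) order says a < b.
a = bytes([0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01])
b = bytes([0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00])
assert a < b

# Reading the bytes as big-endian unsigned longs preserves that order:
a_be = struct.unpack(">Q", a)[0]  # 1
b_be = struct.unpack(">Q", b)[0]  # 2**56
assert a_be < b_be

# Reading them as little-endian unsigned longs reverses it:
a_le = struct.unpack("<Q", a)[0]  # 2**56
b_le = struct.unpack("<Q", b)[0]  # 1
assert a_le > b_le
```

On a little-endian machine (the common case for x86/ARM), a long-at-a-time comparator therefore has to byte-swap or otherwise adjust before comparing, which is the behavior this fix addresses.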