Microsoft this week open-sourced MsQuic, the company's in-house library for handling network connections established via the new QUIC protocol.
The name QUIC originally stood for "Quick UDP Internet Connections," although the IETF version treats it as a name rather than an acronym. It is a new data transfer protocol that is currently being standardized by the Internet Engineering Task Force (IETF).
At a networking level, QUIC is a transport protocol that runs on top of UDP and fills a role similar to TCP, while also absorbing functions that previously sat in higher layers such as SPDY.
Work on QUIC began in the early 2010s and was pioneered by Google, which wanted a faster, lower-latency data transfer protocol to replace TCP.
At its core, QUIC is a mash-up that borrows principles and features from HTTP/2 (itself derived from SPDY), TCP, UDP, and TLS (for encryption). Because the transport and cryptographic handshakes are combined, connections can be established faster, and because encryption is built into the protocol rather than layered on top, they are established in a more secure manner.
MsQuic is a C library developed by Microsoft for the sole purpose of supporting QUIC data connections inside its products. It supports Windows and Linux platforms (Microsoft relies on Linux for some of its cloud infrastructure).
According to Daniel Havey, a Program Manager at Microsoft, the library is set to be deployed widely across the company as the primary method through which its products will handle QUIC connections. For example:
Windows will ship with MsQuic in the kernel to support various inbox features.
The Windows HTTP/3 stack is being built on top of MsQuic.
Microsoft 365 is testing a preview version of IIS using HTTP/3 to reduce tail loss latencies in the last mile.
.NET Core has built HTTP/3 support into Kestrel and HttpClient on top of MsQuic (available in the preview for the 5.0 release of .NET Core).
Havey also said that "several other Microsoft teams" are testing MsQuic, with preview implementations to be announced later on.
"Microsoft is an active participant and driver of QUIC in the industry and is consequently open sourcing our implementation as a reference for others," Havey said in a blog post published yesterday.
"MsQuic brings performance and security improvements to many important networking scenarios. Our online services benefit the most from performance improvements like reduced tail latency and faster connection setup. Our connections will be able to seamlessly switch networks because they can survive IP address/port changes. This equates to better user experience on our edge devices," Havey said.