What is .NET, and why should you choose it?

.NET Team

.NET has changed a lot since we kicked off the fast-moving .NET open-source and cross-platform project. We’ve re-thought and refined the platform, adding new low-level capabilities designed for performance and safety, paired with higher-level productivity-focused features. Span<T>, hardware intrinsics, and nullable reference types are examples. We’re kicking off a new “.NET Design Point” blog series to explore the fundamentals and design choices that define today’s .NET platform, and how they benefit the code you are writing now.

This first post in the series provides a broad overview of the pillars and design point of the platform. It describes “what you get” at a foundational level when you choose .NET, and is intended as a sufficient, facts-focused framing that you can use to describe the platform to others. Subsequent posts will go into more detail on these same topics, since this post doesn’t quite do any of these features justice. This post doesn’t describe tools, like Visual Studio, nor does it cover higher-level libraries and application models like those provided by ASP.NET.

Before getting into the details, it is worth talking about .NET usage. It is used by millions of developers to create cloud, client, and other apps on multiple operating systems and chip architectures. It also runs in some well-known places, like Azure, Stack Overflow, and Unity. It is common to find .NET used in companies of all sizes, but particularly larger ones. In many places, it is a good technology to know to get a job.

.NET design point

The .NET platform stands for Productivity, Performance, Security, and Reliability. The balance .NET strikes between these values is what makes it attractive.

The .NET design point can be boiled down to being effective and efficient in both the safe domain (where everything is productive) and in the unsafe domain (where tremendous functionality exists). .NET is perhaps the managed environment with the most built-in functionality, while also offering the lowest cost to interop with the outside world, with no tradeoff between the two. In fact, many features exploit this seamless divide, building safe managed APIs on the raw power and capability of the underlying OS and CPU.

We can expand on the design point a bit more:

  • Productivity is full-stack with runtime, libraries, language, and tools all contributing to developer user experience.
  • Safe code is the primary compute model, while unsafe code enables additional manual optimizations.
  • Static and dynamic code are both supported, enabling a broad set of distinct scenarios.
  • Native code interop and hardware intrinsics are low cost and high-fidelity (raw API and instruction access).
  • Code is portable across platforms (OS, chip architecture), while platform targeting enables specialization and optimization.
  • Adaptability across programming domains (cloud, client, gaming) is enabled with specialized implementations of the general-purpose programming model.
  • Industry standards like OpenTelemetry and gRPC are favored over bespoke solutions.

The pillars of the .NET Stack

The runtime, libraries, and languages are the pillars of the .NET stack. Higher-level components, like .NET tools and app stacks such as ASP.NET Core, build on top of these pillars. The pillars have a symbiotic relationship, having been designed and built together by a single group (Microsoft employees and the open source community), where individuals work on and inform several of these components.

C# is object-oriented and the runtime supports object orientation. C# requires garbage collection and the runtime provides a tracing garbage collector. In fact, it would be impossible to port C# (in its complete form) to a system without garbage collection. The libraries (and also the app stacks) shape those capabilities into concepts and object models that enable developers to productively write algorithms in intuitive workflows.

C# is a modern, safe, and general-purpose programming language that spans from high-level features such as data-oriented records to low-level features such as function pointers. It offers static typing and type- and memory-safety as baseline capabilities, which simultaneously improves developer productivity and code safety. The C# compiler is also extensible, supporting a plug-in model that enables developers to augment the system with additional diagnostics and compile-time code generation.

A number of C# features have influenced, or were influenced by, state-of-the-art programming languages. For example, C# was the first mainstream language to introduce async and await. At the same time, C# borrows concepts first introduced in other programming languages, for example by adopting functional approaches such as pattern matching and primary constructors.

The core libraries expose thousands of types, many of which integrate with and fuel the C# language. For example, C#’s foreach enables enumerating arbitrary collections, with pattern-based optimizations that enable collections like List<T> to be processed simply and efficiently. Resource management may be left up to garbage collection, but prompt cleanup is possible via IDisposable and direct language support in using.
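
As a small sketch of these patterns together (the file name is illustrative):

using System;
using System.Collections.Generic;
using System.IO;

var numbers = new List<int> { 1, 2, 3 };
foreach (int number in numbers)   // foreach works over any collection via a pattern-based enumerator
{
    Console.WriteLine(number);
}

// `using` guarantees prompt, deterministic cleanup via IDisposable,
// rather than waiting for the garbage collector.
using var reader = new StreamReader("data.txt");
Console.WriteLine(reader.ReadLine());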

String interpolation in C# is both expressive and efficient, integrated with and powered by implementations across core library types like string, StringBuilder, and Span<T>. And language-integrated query (LINQ) features are powered by hundreds of sequence-processing routines in the libraries, like Where, Select, and GroupBy, with an extensible design and implementations that support both in-memory and remote data sources. The list goes on, and what’s integrated into the language directly only scratches the surface of the functionality exposed as part of the core .NET libraries, from compression to cryptography to regular expressions. A comprehensive networking stack is a domain of its own, spanning from sockets to HTTP/3. Similarly, the libraries support processing a myriad of formats and languages like JSON, XML, and tar.
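
To make the LINQ point concrete, here is a small, illustrative query over in-memory data:

using System;
using System.Linq;

string[] words = { "apple", "avocado", "banana", "blueberry", "cherry" };

var summary = words
    .Where(word => word.Length > 5)      // filter with a sequence-processing routine
    .GroupBy(word => word[0])            // group by first letter
    .Select(group => $"{group.Key}: {string.Join(", ", group)}");  // project with string interpolation

foreach (string line in summary)
{
    Console.WriteLine(line);             // prints one line per letter group
}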

The .NET runtime was initially referred to as the “Common Language Runtime (CLR)”. It continues to support multiple languages, some maintained by Microsoft (e.g. C#, F#, Visual Basic, C++/CLI, and PowerShell) and some by other organizations (e.g. Cobol, Java, PHP, Python, Scheme). Many improvements are language-agnostic, which raises all boats.

Next, we’re going to look at the various platform characteristics that these pillars deliver together. We could detail each of the components separately, but you’ll soon see that they cooperate in delivering on the .NET design point. Let’s start with the type system.

Type system

The .NET type system offers significant breadth, catering somewhat equally to safety, descriptiveness, dynamism, and native interop.

First and foremost, the type system enables an object-oriented programming model. It includes types, single-base-class inheritance, interfaces (including default method implementations), and virtual method dispatch to provide sensible behavior for all the type layering that object orientation allows.

Generics are a pervasive feature that allow specializing classes to one or more types. For example, List<T> is an open generic class, while instantiations like List<string> and List<int> avoid the need for separate ListOfString and ListOfInt classes or relying on object and casting as was the case with ArrayList. Generics also enable creating useful systems across disparate types (and reducing the need for a lot of code), like with Generic Math.
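
For example, here is a sketch of Generic Math (the INumber&lt;T&gt; interface from System.Numerics, introduced in .NET 7), where one method works across numeric types without boxing:

using System;
using System.Collections.Generic;
using System.Numerics;

Console.WriteLine(Sum(new[] { 1, 2, 3 }));    // 6, compiled specialized for int
Console.WriteLine(Sum(new[] { 1.5, 2.5 }));   // 4, compiled specialized for double

static T Sum<T>(IEnumerable<T> values) where T : INumber<T>
{
    T sum = T.Zero;           // T.Zero comes from a static abstract interface member
    foreach (T value in values)
    {
        sum += value;         // the + operator is defined by INumber<T>
    }
    return sum;
}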

Delegates and lambdas enable passing methods as data, which makes it easy to integrate external code within a flow of operations owned by another system. They are a kind of “glue code” and their signatures are often generic to allow broad utility.

app.MapGet("/Product/{id}", async (int id) =>
{
    if (await IsProductIdValid(id))
    {
        return await GetProductDetails(id);
    }

    return Products.InvalidProduct;
});

This use of lambdas is part of ASP.NET Core Minimal APIs. It enables providing an endpoint implementation directly to the routing system. In more recent versions, ASP.NET Core makes more extensive use of the type system.

Value types and stack-allocated memory blocks offer more direct, low-level control over data and native platform interop, in contrast to .NET’s GC-managed types. Most of the primitive types in .NET, like integer types, are value types, and users can define their own types with similar semantics.

Value types are fully supported through .NET’s generics system, meaning that generic types like List<T> can provide flat, no-overhead memory representations of value type collections. In addition, .NET generics provide specialized compiled code when value types are substituted, meaning that those generic code paths can avoid expensive GC overhead.

byte magicSequence = 0b1000_0001;
Span<byte> data = stackalloc byte[128];
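
// DuplicateSequence is the author's helper (not shown); assume it fills the slice with the value.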
DuplicateSequence(data[0..4], magicSequence);

This code results in stack-allocated values. The Span<byte> is a safe and richer version of what would otherwise be a byte*, providing a length value (with bounds checking) and convenient span slicing.

Ref types and variables are a sort of mini programming model that offers lower-level and lighter-weight abstractions over type system data. This includes Span&lt;T&gt;. This programming model is not general purpose and includes significant restrictions to maintain safety.

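// A ref field, as used in the internal implementation of Span<T> (simplified):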
internal readonly ref T _reference;

This use of ref results in copying a pointer to the underlying storage rather than copying the data referenced by that pointer. Value types are “copy by value” by default. ref provides a “copy by reference” behavior, which can provide significant performance benefits.
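
A minimal sketch of ref locals, which use the same “copy by reference” behavior:

using System;

int[] numbers = { 10, 20, 30 };

ref int first = ref numbers[0];   // a reference into the array, not a copy of the element
first = 99;                       // writes through the reference

Console.WriteLine(numbers[0]);    // prints 99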

Automatic memory management

The .NET runtime provides automatic memory management via a garbage collector (GC). For any language, its memory management model is likely its most defining characteristic. This is true for .NET languages.

Heap corruption bugs are notoriously hard to debug; it’s not uncommon for engineers to spend many weeks, if not months, tracking them down. Many languages use a garbage collector as a user-friendly way of eliminating these bugs, because the GC ensures correct object lifetimes. Typically, GCs free memory in batches to operate efficiently. This incurs pauses that may not be suitable if you have very tight latency requirements, and memory usage tends to be higher. On the other hand, GCs tend to have better memory locality, and some are capable of compacting the heap, making it less prone to memory fragmentation.

.NET has a self-tuning, tracing GC. It aims to deliver “hands off” operation in the general case while offering configuration options for more extreme workloads. The GC is the result of many years of investment, improving and learning from many kinds of workloads.

Bump pointer allocation — objects are allocated by incrementing an allocation pointer by the size needed (instead of finding space in segregated free blocks), so objects allocated together tend to stay together. And since they are often accessed together, this enables better memory locality, which is important for performance.

Generational collections — object lifetimes commonly follow the generational hypothesis: an object either lives for a very long time or dies very quickly. So it’s much more efficient for a GC to collect only the memory occupied by ephemeral objects most of the time it runs (called ephemeral GCs), instead of having to collect the whole heap (called full GCs) every time.

Compaction — the same amount of free space is more useful in fewer, larger chunks than in many smaller ones. During a compacting GC, surviving objects are moved together so that larger free spaces can be formed. This is harder to implement than a non-moving GC, as it needs to update references to the moved objects. The .NET GC is dynamically tuned to perform compaction only when it determines the reclaimed memory is worth the GC cost. This means ephemeral collections are often compacting.

Parallel — GC work can run on a single thread or on multiple threads. The Workstation flavor does GC work on a single thread, while the Server flavor does it on multiple GC threads so that it can finish much faster. The Server GC can also accommodate a higher allocation rate, as there are multiple heaps the application can allocate on instead of only one, so it’s very good for throughput.

Concurrent — doing GC work while user threads are paused — called Stop-The-World — makes the implementation simpler but the length of these pauses may be unacceptable. .NET offers a concurrent flavor to mitigate that issue.

Pinning — the .NET GC supports object pinning, which enables zero-copy interop with native code. This capability enables high-performance and high-fidelity native interop, with limited overhead for the GC.

Standalone GC — a standalone GC with a different implementation can be used (specified via config and satisfying interface requirements). This makes investigations and trying out new features much easier.

Diagnostics — The GC provides rich information about memory and collections, structured in a way that allows you to correlate data with the rest of the system. For example, you can evaluate the GC impact of your tail latency by capturing GC events and correlating them with other events like IO to calculate how much GC is contributing vs other factors, so you can direct your efforts to the right components.
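
As a small taste of the information available in-process, an app can query the GC directly (a sketch using the System.Runtime APIs):

using System;
using System.Runtime;

GCMemoryInfo info = GC.GetGCMemoryInfo();
Console.WriteLine($"Server GC: {GCSettings.IsServerGC}");
Console.WriteLine($"Heap size: {info.HeapSizeBytes:N0} bytes");
Console.WriteLine($"Gen 0 collections so far: {GC.CollectionCount(0)}");
Console.WriteLine($"Pause time percentage: {info.PauseTimePercentage}%");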

Safety

Programming safety has been one of the top topics of the last decade. It is an inherent component of a managed environment like .NET.

Forms of safety:

  • Type safety — An arbitrary type cannot be used in place of another, avoiding undefined behavior.
  • Memory safety — Only allocated memory is ever used; for example, a variable either references a live object or is null.
  • Concurrency or thread safety — Shared data cannot be accessed in a way that would result in undefined behavior.

Note: The US Federal government recently published guidance on the importance of memory safety.

.NET was designed as a safe platform from the start. In particular, it was intended to enable a new generation of web servers, which inherently need to accept untrusted input in the world’s most hostile computing environment (the Internet). It is now generally accepted that web programs should be written in safe languages.

Type safety is enforced by a combination of the language and the runtime. The compiler validates static invariants, such as assigning unlike types — for example, assigning string to Stream — which will produce compiler errors. The runtime validates dynamic invariants, such as casting between unlike types, which will produce an InvalidCastException.
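
A minimal illustration of both layers:

using System;
using System.IO;

object value = "some text";

// string s = 5;               // static invariant: compile-time error CS0029
Stream stream = (Stream)value; // dynamic invariant: compiles, but throws InvalidCastException at runtime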

Memory safety is provided largely by cooperation between a code generator (like a JIT) and a garbage collector. Variables either reference live objects, are null, or are out of scope. Memory is auto-initialized by default such that new objects do not use uninitialized memory. Bounds checking ensures that accessing an element with an invalid index will not allow reading undefined memory — often caused by off-by-one errors — but instead will result in an IndexOutOfRangeException.

null handling is a specific form of memory safety. Nullable reference types is a C# language and compiler feature that statically identifies code that is not safely handling null. In particular, the compiler warns you if you dereference a variable that might be null. You can also disallow null assignment so the compiler warns you if you assign a variable from a value that might be null. The runtime has a matching dynamic validation feature that prevents null references from being accessed, by throwing NullReferenceException.

This feature relies on nullable attributes in the libraries. It also relies on their exhaustive application within the libraries and app stacks, so that user code can be provided with accurate results from static analysis tools.

string? SomeMethod() => null;
string value = SomeMethod() ?? "default string";

This code is considered null-safe by the C# compiler since null use is declared and handled, in part by ??, the null coalescing operator. The value variable will always be non-null, matching its declaration.

There is no built-in concurrency safety in .NET. Instead, developers need to follow patterns and conventions to avoid undefined behavior. There are also analyzers and other tools in the .NET ecosystem that provide insight into concurrency issues. And the core libraries include a multitude of types and methods that are safe to be used concurrently, for example concurrent collections that support any number of concurrent readers and writers without risking data structure corruption.
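
For example, ConcurrentDictionary&lt;TKey, TValue&gt; can be written to from many threads at once (a minimal sketch):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

var counts = new ConcurrentDictionary<string, int>();

// 1,000 concurrent writers; the dictionary's internal state stays consistent.
Parallel.For(0, 1_000, _ =>
{
    counts.AddOrUpdate("hits", 1, (_, current) => current + 1);
});

Console.WriteLine(counts["hits"]); // 1000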

The runtime exposes safe and unsafe code models. Safety is guaranteed for safe code, which is the default, while developers must opt-in to using unsafe code. Unsafe code is typically used to interop with the underlying platform, interact with hardware, or to implement manual optimizations for performance critical paths.
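
A minimal sketch of that opt-in (compiling it requires enabling unsafe code, for example via AllowUnsafeBlocks in the project file):

using System;

int[] numbers = { 1, 2, 3, 4 };

unsafe
{
    fixed (int* p = numbers)    // pin the array so the GC cannot move it
    {
        for (int i = 0; i < numbers.Length; i++)
        {
            Console.WriteLine(p[i]);   // raw pointer access, no bounds checks
        }
    }
}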

A sandbox is a special form of safety that provides isolation and restricts access between components. We rely on standard isolation technologies, like processes (and CGroups), virtual machines, and Wasm (with their varying characteristics).

Error handling

Exceptions are the primary error handling model in .NET. Exceptions have the benefit that error information does not need to be represented in method signatures or handled by every method.

The following code demonstrates a typical pattern:

try
{
    var lines = await File.ReadAllLinesAsync(file);
    Console.WriteLine($"The file {file} has {lines.Length} lines.");
}
catch (Exception e) when (e is FileNotFoundException or DirectoryNotFoundException)
{
    Console.WriteLine($"{file} doesn't exist.");
}

Proper exception handling is essential for application reliability. Expected exceptions can be intentionally handled in user code; otherwise an app will crash. A crashed app is more reliable and diagnosable than an app with undefined behavior.

Exceptions are thrown from the point of an error and automatically collect additional diagnostic information about the state of the program, which is used with interactive debugging, application observability, and post-mortem debugging. Each of these diagnostic approaches relies on having access to rich error information and application state to diagnose problems.

Exceptions are intended for rare situations. This is, in part, because they have a relatively high performance cost. They are not intended to be used for control flow, even though they are sometimes used that way.

Exceptions are used (in part) for cancellation. They enable efficiently halting execution and unwinding a callstack that had work in progress once a cancellation request is observed.

try 
{ 
    await source.CopyToAsync(destination, cancellationToken); 
} 
catch (OperationCanceledException) 
{ 
    Console.WriteLine("Operation was canceled"); 
}

.NET design patterns include alternative forms of error handling for situations when the performance cost of exceptions is prohibitive. For example, int.TryParse returns a bool, with an out parameter containing the parsed valid integer upon success. Dictionary<TKey, TValue>.TryGetValue offers a similar model, returning a valid TValue type as an out parameter in the true case.
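
A sketch of that Try pattern:

using System;

if (int.TryParse("42", out int number))
{
    Console.WriteLine($"Parsed {number}.");
}
else
{
    Console.WriteLine("Invalid input, reported without throwing an exception.");
}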

Error handling, and diagnostics more generally, is implemented via low-level runtime APIs, higher-level libraries, and tools. These capabilities have been designed to support newer deployment options like containers. For example, dotnet-monitor can egress runtime data from an app to a listener via a built-in diagnostic-oriented web server.

Concurrency

Support for doing multiple things at the same time is fundamental to practically all workloads, whether it be client applications doing background processing while keeping the UI responsive, services handling thousands upon thousands of simultaneous requests, devices responding to a multitude of simultaneous stimuli, or high-powered machines parallelizing the processing of compute-intensive operations. Operating systems provide support for such concurrency via threads, which enable multiple streams of instructions to be processed independently, with the operating system managing the execution of those threads on any available processor cores in the machine. Operating systems also provide support for doing I/O, with mechanisms provided for enabling I/O to be performed in a scalable manner with many I/O operations “in flight” at any particular time. Programming languages and frameworks can then provide various levels of abstraction on top of this core support.

.NET provides such concurrency and parallelization support at multiple levels of abstraction, both via libraries and deeply integrated into C#. A Thread class sits at the bottom of the hierarchy and represents an operating system thread, enabling developers to create new threads and subsequently join with them. ThreadPool sits on top of threads, allowing developers to think in terms of work items that are scheduled asynchronously to run on a pool of threads, with the management of those threads (including the addition and removal of threads from the pool, and the assignment of work items to those threads) left up to the runtime. Task then provides a unified representation for any operations performed asynchronously and that can be created and joined with in multiple ways; for example, Task.Run allows for scheduling a delegate to run on the ThreadPool and returns a Task to represent the eventual completion of that work, while Socket.ReceiveAsync returns a Task<int> (or ValueTask<int>) that represents the eventual completion of the asynchronous I/O to read pending or future data from a Socket. A vast array of synchronization primitives are provided for coordinating activities synchronously and asynchronously between threads and asynchronous operations, and a multitude of higher-level APIs are provided to ease the implementation of common concurrency patterns, e.g. Parallel.ForEach and Parallel.ForEachAsync make it easier to process all elements of a data sequence in parallel.

Asynchronous programming support is also a first-class feature of the C# programming language, which provides the async and await keywords that make it easy to write and compose asynchronous operations while still enjoying the full benefits of all the control flow constructs the language has to offer.
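
A sketch that combines these layers, with Parallel.ForEachAsync fanning asynchronous I/O out across a list (the URLs are illustrative):

using System;
using System.Net.Http;
using System.Threading.Tasks;

string[] urls = { "https://example.com/a", "https://example.com/b" };
using var client = new HttpClient();

await Parallel.ForEachAsync(urls, async (url, cancellationToken) =>
{
    string body = await client.GetStringAsync(url, cancellationToken);
    Console.WriteLine($"{url}: {body.Length} characters");
});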

Reflection

Reflection is a “programs as data” paradigm, allowing one part of a program to dynamically query and/or invoke another, in terms of assemblies, types and members. It is particularly useful for late-bound programming models and tools.

The following code uses reflection to find and invoke types.

foreach (Type type in typeof(Program).Assembly.DefinedTypes)
{
    if (type.IsAssignableTo(typeof(IStory)) &&
        !type.IsInterface)
    {
        IStory? story = (IStory?)Activator.CreateInstance(type);
        if (story is not null)
        {
            var text = story.TellMeAStory();
            Console.WriteLine(text);
        }
    }
}

interface IStory
{
    string TellMeAStory();
}

class BedTimeStory : IStory
{
    public string TellMeAStory() => "Once upon a time, there was an orphan learning magic ...";
}

class HorrorStory : IStory
{
    public string TellMeAStory() => "On a dark and stormy night, I heard a strange voice in the cellar ...";
}

This code dynamically enumerates all of an assembly’s types that implement a specific interface, instantiates an instance of each type, and invokes a method on the object via that interface. The code could have been written statically instead, since it’s only querying for types in an assembly it’s referencing, but to do so it would need to be handed a collection of all of the instances to process, perhaps as a List<IStory>. This late-bound approach would be more likely to be used if this algorithm loaded arbitrary assemblies from an add-ins directory. Reflection is often used in scenarios like that, when assemblies and types are not known ahead of time.

Reflection is perhaps the most dynamic system offered in .NET. It is intended to enable developers to create their own binary code loaders and method dispatchers, with semantics that can match or diverge from static code policies (defined by the runtime). Reflection exposes a rich object model, which is straightforward to adopt for narrow use cases but requires a deeper understanding of the .NET type system as scenarios get more complex.

Reflection also enables a separate mode where IL bytecode generated at runtime can be JIT-compiled, sometimes used to replace a general algorithm with a specialized one. This is often done in serializers or object-relational mappers once the object model and other details are known.
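
A minimal sketch using DynamicMethod from System.Reflection.Emit:

using System;
using System.Reflection.Emit;

// Emit the IL for `static int Add(int a, int b) => a + b;` at runtime.
var add = new DynamicMethod("Add", typeof(int), new[] { typeof(int), typeof(int) });
ILGenerator il = add.GetILGenerator();
il.Emit(OpCodes.Ldarg_0);
il.Emit(OpCodes.Ldarg_1);
il.Emit(OpCodes.Add);
il.Emit(OpCodes.Ret);

var addFunc = (Func<int, int, int>)add.CreateDelegate(typeof(Func<int, int, int>));
Console.WriteLine(addFunc(2, 3)); // 5 -- the emitted IL is JIT-compiled on first call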

Compiled binary format

Apps and libraries are compiled to a standardized cross-platform bytecode in the PE/COFF format. Binary distribution is foremost a performance feature: it enables apps to scale to larger and larger numbers of projects. Each library includes a database of imported and exported types, referred to as metadata, which serves a significant role for both development operations and for running the app.

Compiled binaries include two main aspects:

  • Binary bytecode — a terse and regular format that skips the need to parse textual source after compilation by a high-level language compiler (like C#).
  • Metadata — describes imported and exported types, including the location of the bytecode for a given method.

For development, tools can efficiently read metadata to determine the set of types exposed by a given library and which of those types implement certain interfaces, for example. This process makes compilation fast and enables IDEs and other tools to accurately present lists of types and members for a given context.

For runtime, metadata enables libraries to be loaded lazily, and method bodies even more so. Reflection (discussed earlier) is the runtime API for metadata and IL. There are other, more appropriate APIs for tools.
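
For example, the System.Reflection.Metadata APIs can enumerate a library’s types without loading it for execution (a sketch; the file name is illustrative):

using System;
using System.IO;
using System.Reflection.Metadata;
using System.Reflection.PortableExecutable;

using var stream = File.OpenRead("SomeLibrary.dll");
using var peReader = new PEReader(stream);
MetadataReader metadata = peReader.GetMetadataReader();

foreach (TypeDefinitionHandle handle in metadata.TypeDefinitions)
{
    TypeDefinition type = metadata.GetTypeDefinition(handle);
    Console.WriteLine($"{metadata.GetString(type.Namespace)}.{metadata.GetString(type.Name)}");
}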

The IL format has remained backwards-compatible over time. The latest .NET version can still load and execute binaries produced with .NET Framework 1.0 compilers.

Shared libraries are typically distributed via NuGet packages. NuGet packages, with a single binary, can work on any operating system and architecture, by default, but can also be specialized to provide specific behavior in specific environments.

Code generation

.NET bytecode is not a machine-executable format, so it must be made executable by some form of code generator. This can be achieved by ahead-of-time (AOT) compilation, just-in-time (JIT) compilation, interpretation, or transpilation. In fact, all of these are used today in various scenarios.

.NET is most known for JIT compilation. JITs compile methods (and other members) to native code while the application is running and only as they are needed, hence the “just in time” name. For example, a program might only call one of several methods on a type at runtime. A JIT can also take advantage of information that is only available at runtime, like values of initialized readonly static variables or the exact CPU model that the program is running on, and can compile the same method multiple times in order to optimize each time for different goals and with learnings from previous compilations.

JITs produce code for a given operating system and chip architecture. .NET has JIT implementations that support, for example, Arm64 and x64 instruction sets, and Linux, macOS, and Windows operating systems. As a .NET developer, you don’t have to worry about the differences between CPU instruction sets and operating system calling conventions. The JIT takes care of producing the code that the CPU wants. It also knows how to produce fast code for each CPU, and OS and CPU vendors often help us do exactly that.

AOT is similar except that the code is generated before the program is run. Developers choose this option because it can significantly improve startup time by eliminating the work done by a JIT. AOT-built apps are inherently operating system and architecture specific, which means that extra steps are required to make an app run in multiple environments. For example, if you want to support Linux and Windows and Arm64 and x64, then you need to build four variants (to allow for all the combinations). AOT code can provide valuable optimizations, too, but not as many as the JIT in general.

We’ll cover interpretation and transpilation in a later post, however, they also play critical roles in our ecosystem.

One notable code-generator optimization is intrinsics. Hardware intrinsics are an example where .NET APIs are directly translated into CPU instructions. This approach has been used pervasively throughout the .NET libraries for SIMD instructions.
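
A sketch using the cross-platform Vector128 APIs from System.Runtime.Intrinsics, which the code generator lowers to SIMD instructions where the hardware supports them:

using System;
using System.Runtime.Intrinsics;

Vector128<float> left = Vector128.Create(1f, 2f, 3f, 4f);
Vector128<float> right = Vector128.Create(10f, 20f, 30f, 40f);

// Four floating-point additions in one vector operation.
Vector128<float> sum = Vector128.Add(left, right);

Console.WriteLine(sum); // <11, 22, 33, 44>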

Interop

.NET has been explicitly designed for low-cost interop with native libraries. .NET programs and libraries can seamlessly call low-level operating system APIs or tap into the vast ecosystem of C/C++ libraries. The modern .NET runtime focuses on providing low-level interop building blocks, such as the ability to call native methods via function pointers, to expose managed methods as unmanaged callbacks, and to customize interface casting. .NET is also continually evolving in this area; .NET 7 released source-generated solutions that further reduce overhead and are AOT-friendly.

The following demonstrates the efficiency of C# function pointers with the LibraryImport source generator introduced in .NET 7 (this source-generator support layers on top of the DllImport support that’s existed since the beginning of .NET).

// Using a function pointer avoids a delegate allocation.
// Equivalent to `void (*fptr)(int) = &Callback;` in C
delegate* unmanaged<int, void> fptr = &Callback;
RegisterCallback(fptr);

[UnmanagedCallersOnly]
static void Callback(int a) => Console.WriteLine($"Callback:  {a}");

[LibraryImport("...", EntryPoint = "RegisterCallback")]
static partial void RegisterCallback(delegate* unmanaged<int, void> fptr);

Independent packages provide higher-level domain-specific interop solutions by taking advantage of these low-level building blocks, for example ClangSharp, Xamarin.iOS & Xamarin.Mac, CsWinRT, CsWin32 and DNNE.

These new features don’t mean built-in interop solutions like built-in runtime managed/unmanaged marshalling or Windows COM interop aren’t useful — we know they are and that people have come to rely upon them. Those features that have been historically built into the runtime continue to be supported in the .NET runtime. However, they are for backward compatibility only, with no plans to evolve them further. All future investments will be focused on the interop building blocks and in the domain-specific solutions that they enable.

Binary distributions

The .NET Team at Microsoft maintains several binary distributions, more recently adding support for Android, iOS, and WebAssembly. The team uses a variety of techniques to specialize the codebase for each one of these environments. Most of the platform is written in C#, which enables porting to be focused on a relatively small set of components.

The community maintains another set of distributions, largely focused on Linux. For example, .NET is included in Alpine Linux, Fedora, Red Hat Enterprise Linux, and Ubuntu.

The community has also extended .NET to run on other platforms. Samsung ported .NET for their Arm-based Tizen platform. Red Hat and IBM ported .NET to LinuxONE/s390x. Loongson Technology ported .NET to LoongArch. We hope and expect that new partners will port .NET to other environments.

Unity Technologies has started a multi-year initiative to modernize their .NET runtime.

The .NET open source project is maintained and structured to enable individuals, companies, and other organizations to collaborate together in a traditional upstream model. Microsoft is the steward of the platform, providing both project governance and project infrastructure (like CI pipelines). The Microsoft team collaborates with organizations to help make them successful using and/or porting .NET. The project has a broad upstreaming policy, which includes accepting changes that are unique to a given distribution.

A major focus is the source-build project, which multiple organizations use to build .NET according to typical distro rules, for example Canonical (Ubuntu). This focus has expanded more recently with the addition of a Virtual Mono Repo (VMR). The .NET project is composed of many repos, which aids .NET developer efficiency but makes it harder to build the complete product. The VMR solves that problem.

Summary

We’re several versions into the modern .NET era, having recently released .NET 7. We thought it would be useful if we summarized what we’ve been striving to build — at the lowest levels of the platform — since .NET Core 1.0. While we’ve clearly kept to the spirit of the original .NET, the result is a new platform that strikes a new path and offers new and considerably more value to developers.

Let’s end where we started. .NET stands for four values: Productivity, Performance, Security, and Reliability. We are big believers that developers are best served when different language platforms offer different approaches. As a team, we seek to offer high productivity to .NET developers while providing a platform that leads in performance, security, and reliability.

We plan to add more posts in this series. Which topics would you like to see addressed first? Please tell us in the comments. Would you like more of this “big picture” content?

If you want more of this content, you might check out Introduction to the Common Language Runtime (CLR).

This post was written by Jan Kotas, Rich Lander, Maoni Stephens, and Stephen Toub, with the insight and review of our colleagues on the .NET team.

Comments

  • W L

    Great article, looking forward to the next post.

    the code present in the Interop part,

    delegate* unmanaged<int, void> fptr = &RegisterCallback;

    should probably be

    delegate* unmanaged<int, void> fptr = &Callback;
    • Richard Lander (Microsoft employee)

      Thanks for the report. Fixed!

  • Carlos Santos

    Good article!

  • Emmanuel Adebiyi

    Great read!

  • Yawei Wang

    Great article!

  • Paulo Pinto

    While a very interesting article to read, I keep wondering if this isn’t some kind of white paper for decision makers that are nowadays picking other ecosystems instead of .NET, regardless of the open source and cross platform efforts.

    Some food for thought regarding future posts on “Why .NET and not something else”.

    Some of the reasons that pain me as a .NET user are the product decisions that favor Visual Studio to the detriment of the developer experience on Visual Studio for Mac and Visual Studio Code, making many of us shell out for Rider licenses to get similar developer experiences on Mac and GNU/Linux.

    The ongoing political issues between .NET development and Windows / C++, many times killing nice .NET products like XNA and forcing us to also code in C++, because the teams responsible for those decisions don’t see value in supporting .NET bindings. Like the WinUI team that keeps pushing APIs that are C++/WinRT first, as happened with Windows 11 widgets. If we are lucky, maybe some group of people will put in the effort for free that a company with the budget to acquire Activision doesn’t feel like making.

    The ongoing issues with multiple UI frameworks, none of which supports GNU/Linux; MAUI supports macOS by using an iOS porting framework, while on Windows we get a fragmentation of development efforts, all of them requiring code rewrites, different APIs, different XAML, with hardly anything portable across the GUI framework workloads.

    The current language strategy that makes it quite clear that CLR actually means C# Language Runtime, with F# and VB left to just keep working at the bare minimum. An earlier article ignored C++/CLI, while this one thankfully still acknowledges its existence, although Developer Community has quite a few tickets regarding how C++/CLI is left behind on ISO C++ support.

    Sorry about the rant, but these are the kinds of issues that we in the community care about when someone asks us “Why .NET and not something else?”.

    • Andrew Witte

      I agree with you. These are mine.

      • C# evolution & adoption is crippled by the JIT being horrible for platforms requiring AOT like WASM, iOS, etc

      • IL spec can’t change fast enough holding C# back from improving generics system or adding features requiring IL or JIT changes

      • A lang should NOT be built around a JIT. A JIT should be built around a lang. This affects how your standard libs are done, etc. Which MS has re-done so many times at this point (some for this reason), causing a lot of fragmented ecosystems within C# & a huge amount of confusion over the years.

      • Avalonia is the only UI system done correctly in terms of an actual new WPF-like XAML write-once, look-the-same-everywhere solution using a correctly done agnostic rendering API (aka Skia). Yet I still don’t think MS funds it, but I might be wrong.

      • VS Code uses a different solution approach than VS does. VS in general is not a cross-platform app. It’s just re-implemented multiple times by different people & companies. All this adds up to a fragmented IDE environment that’s very hard to make tools around.

      I’ve followed & heavily used C# since .NET FW 3.0, & while I do like that it’s not a Windows-only runtime (outside Mono), so many other negatives have come after MS bought Mono that I find myself just as annoyed in 2023, if not more so.

      It feels like the vision of C# is 99% focused on bloated web apps (what happened to battery-saving & performant apps?). Outside ASP.NET it’s not actually solving many problems I can see.

      • Richard Lander (Microsoft employee)

        Can you elaborate on some of these thoughts? They don’t come up in our analysis so I’m curious what is motivating them for you.

        In particular:

        • “IL spec can’t change fast enough …”
        • “A lang should NOT be built around a JIT. A JIT should be built around a lang. …”

        You are correct that we haven’t updated the IL spec in quite some time. We’ve made progress without that and it is rarely raised as a key barrier to innovation. Are there some specific features that you want that we haven’t prioritized?

        C# is not tied to a JIT. JavaScript might be a better example of that. That’s not a judgement statement on JavaScript, just that it is much more dynamic. In fact, it started with interpretation (and sometimes still requires it). However, JavaScript is quite popular.

        • Andrew Witte

          “IL spec can’t change fast enough …”
          — Variadic generics might be an example. The proposed alternative of "Foo(params ReadOnlyList values)" is not the same, as it only supports a single type in that array. If "object" typing was used, casting could cause boxing allocs. The IL needs to be able to represent these "variadic" concepts & compilers need to be able to condition on them in the IL. It also sounds like F# had to hack around these limitations of IL to implement stuff like this. There are ECS patterns that are very hard to handle in C# atm because this doesn’t exist.

          “A lang should NOT be built around a JIT. A JIT should be built around a lang. …”
          — IMO a lang design should be fundamentally portable so it can be flashed and executed on an MCU or be runnable in AOT-only environments without interpreters (as Mono does on iOS or WASM, for example). The JIT should be (optional), in that it offers extended functionality of a lang/framework for special design-pattern cases. OTHERWISE the core frameworks of that lang start to use dynamic code generation, which is either non-portable or interpreted & slow, instead of compiler-generated patterns that end up being portable and normally faster (these are also just as easy to structure with the right lang features).

          • Richard Lander (Microsoft employee)

            We can change the IL spec if we need to. It’s likely that the scenarios that would benefit from it just haven’t been prioritized.

            We have the Native AOT project, so it’s easy to do a test on what works well with it and what doesn’t. C# and most low-level .NET libraries work well with it. Even Reflection works with it. ASP.NET Core currently doesn’t work well with it, but that’s more due to trimming than code generation. Clearly, we have more work to do, but I think C# passes your test of “JIT is optional”. It doesn’t have a bias to any code generation strategy. I hope the post didn’t suggest that it does.

  • Steve Naidamast

    I have always enjoyed working with the .NET environments since it was first released commercially in 2001.

    However, since 2010 when ASP.NET MVC first emerged, Microsoft has done more to literally destroy what they created instead of simply refining what already existed.

    There was and still is really nothing wrong with the original .NET Frameworks, which I continue to use regularly in my own development endeavors. The new Core Frameworks appear to be seriously scaled-down rewrites of the original frameworks, offering far less than what we originally had.

    This has caused a massive effort by the development community to make up for such losses as WCF, Silverlight, and other sub-systems that the original frameworks always offered. In terms of web development, massive increases in complexity have been provided in exchange for the more compartmentalized system of ASP.NET WebForms.

    There has never been anything seriously wrong with WebForms and software engineers have demonstrated this over the years. But the massive confusion that the new web development environments bring appears to be more acceptable than the more simplistic WebForms environment.

    Sure there are things wrong with WebForms but so there are with the new environments.

    These changes do not make for secure and efficient web development, or development of any other kind, which have been so adversely affected.

    Even the .NET languages have been modified to levels of absurdity, with increasingly arcane syntax, with the idea that newer functionality can now be worked with. Unless you are one who actually needs such functionality, is any of this really necessary, in that entire environments are changed instead of simply adding some new sub-modules to these environments? Most of us do not require such processes and can easily still provide quality applications without the fanfare.

    So to even ask the question that this piece poses indicates that maybe someone in Microsoft has begun to see the fallacies in promoting these new environments and changes. With all the complaints I have read over the years among .NET professionals regarding the messes that they now find themselves in with .NET, is it any wonder that many may be considering less complex environments to develop with?

    • Chris Warrick

      Silverlight is dead, because browser plugins are dead, and Silverlight was never able to match Flash’s success. WCF is dead, because nobody likes SOAP.

      As for WebForms, there are tons of things wrong with it. WebForms has really weird state management patterns and is often confusing with its ideas about the client/server split. WebForms tends to break basic interactions, such as making links require JS with javascript: links (which cannot be opened in another tab), and requiring a page reload for things that more normal frameworks handle without it (such as displaying some controls based on a radio button). ASP.NET MVC and ASP.NET Core align much more closely with modern development practices, can be understood by web developers coming from elsewhere, are extensible with plain JS, and allow open REST-ful APIs to be built.

      • Daite Dve

        Quite the opposite! WebAssembly is the new way of plugins. So Silverlight has NOTHING wrong compared to any modern sht.

        • Christopher Haws

          WebAssembly is an adopted web standard and is becoming even more than just a web standard (Docker+Wasm). Browser plugins were never standardized, had to be installed by users, and didn’t run within the sandboxed environment of the browser. They are 100% different from the ground up.

      • chrisxfire

        “WCF is dead, because nobody likes SOAP.”

        Truer words may never have been spoken.

    • Richard Lander (Microsoft employee)

      I respect your viewpoint. I’m not seeing you address the bigger picture of performance, hosting costs, and climate.

      ASP.NET Core is tremendously more efficient than WebForms. We’re not talking about 2x but larger multiples. People get to make their own decisions and tradeoffs; however, if you prioritize performance, cost, and climate, then modern solutions like ASP.NET Core are an obvious and important choice.

      We are well into the project of moving Microsoft services from .NET Framework to .NET Core. The teams that have moved are collectively saving tens of millions of dollars a year in hosting costs. That’s because of massive performance wins in .NET Core. These wins are also observable by customers, since latency is significantly improved in addition to throughput. Last, organizations that are required to reduce carbon use need to strongly consider moving away from WebForms.

      Here’s a great example from a tweet I had handy: https://twitter.com/Nick_Craver/status/1245027023371853825

      • Paulo Pinto

        It doesn’t matter, because many products like Sitecore, SharePoint, SQL Server and Dynamics still depend heavily on .NET Framework.

        No idea what Microsoft plans to do with their own products on that list, but Sitecore is actually moving away from .NET; most of their new products are based on other stacks, because when companies are forced to rewrite, they also consider rewriting into something else.

        • Richard Lander (Microsoft employee)

          Right. The same is true for many teams and companies we talk to. There is always a split of components that stay on .NET Framework vs move to .NET Core. Companies make choices given goals, constraints, and tradeoff decisions.

          Where performance is the highest priority, they move. You are also correct that some folks choose to rewrite in something else. That may be because it has characteristics they need, or because someone on the team wants to try a project in a newer language platform.

          All of this does matter. As you yourself say, people are making decisions to achieve their goals. I applaud that.

          The purpose of this post is to demonstrate that the low-level .NET architecture aligns quite well with modern needs and paradigms. It has very much moved on since the .NET Framework days.

  • Eric Johnson

    Although a hobby programmer who was introduced to C/C++ then VBA within Excel, I’ve appreciated this overview. It definitely gives a better perspective on tools available as I rewrite my bespoke applications using .NET architecture to release on multiple platforms. Thank you for this read. I look forward to the next!

  • André R.

    .NET, since it opened up with .NET Core and later, has certainly improved the foundation of the platform and the ecosystem a lot, especially its change to run on any major OS.

    1. But one thing that comes off as completely lacking is interoperability with industry solutions from other vendors; coming from other open platforms, this lack of first-party support for such solutions seems odd.

    Main example in EF Core: lack of first-party support for industry SQL solutions like MySQL (including flavours like MariaDB, Amazon Aurora, ...), PostgreSQL, and Oracle.

    2. Besides this, the fragmentation across different Visual Studio variants seems counterintuitive for what .NET aims at in terms of multi-platform support. Hopefully it’s just a temporary situation, but if so, clear communication on the plans/vision for the future would benefit everyone.

    • Victor Lee

      EF Core does support MySQL and MariaDB (there are two different providers for them). There are also providers for PostgreSQL and Oracle. You’re right in that there is not one for Amazon Aurora specifically, but you can try the PostgreSQL one to see if it works for that.
      You can see the full list of providers here: https://learn.microsoft.com/en-us/ef/core/providers/?tabs=dotnet-core-cli

      • Greg Zapp

        In fact, the Npgsql (open-source ADO.NET data provider for PostgreSQL) lead developer is on the Entity Framework team.

        It’s a fantastic project which even includes support for logical decode.

        • André R.

          Nice, I didn’t know that. I’ll have a closer look 👍

      • André R.

        You are right, there are (probably good) options, but my point is that they are supported by third parties. In other languages/frameworks it’s supported directly by the OR/M, framework, or language in question. Typically the vendors behind these solutions help develop and maintain the given provider together with the maintainers (the EF team in this case).

        The main technical benefit is that when there is a new release, no providers are left behind; they are all released with new major versions. Also, all providers are maintained with the same code standard, feature set, performance aims, and support.*

        *Also a business benefit: fewer providers to rely on, and better interoperability (a MySQL provider by the EF team + third parties can explicitly support MariaDB and Aurora without much extra effort, something Oracle might not prioritise at all with its own provider, for natural reasons).

        .NET is supported natively on Linux and Mac; do you think it would have been considered cross-platform if this was provided by a third party? (cough Mono cough 😉)

  • MgSam

    I’ve made this point many times, but as Microsoft is well aware, a key area of development interest in the past few years has been data science. And C#/.NET remains a 2nd- or 3rd-rate platform for data science. The main reason is that Microsoft refuses to commit 1 or 2 developers to work full-time on actually finishing the DataFrame that has been languishing in development h*ll for the past few years. It is beyond the pale that in 2023, the default (and only) built-in solution for working with tabular data is the 20-year-old DataTable, which was written before generics even existed in C#.

    .NET needs a first-class DataFrame that ships out-of-the-box. It needs to be supported in all the legacy System.Data / Microsoft.Data APIs.

    And Visual Studio needs to integrate the C# REPL with Visualizer support, so DataFrames can be created, easily inspected and include the ability to render graphs. Most devs don’t need or want Jupyter notebooks (which I haven’t seen any updates about in at least a year, so I’m guessing it’s become just yet another abandoned MS project).

    • chrisxfire

      As someone who tried some data science development on .NET and reverted back to Python—even though it performs far worse—I agree with this point.

  • Daite Dve

    Good article! It just confirms that I should stay on my old lovely .NET Framework 4.8 and WinForms/WPF.

    • Richard Lander (Microsoft employee)

      I am glad you are happy.

      Much of this post applies to .NET Framework, too, for example on type and memory safety. Key details don’t apply.

    • chrisxfire

      Come to Core, Daite. It’s nice over here. We have cake.
