TypeScript: First Impressions

Today Microsoft announced TypeScript, a typed superset of Javascript. This means that existing Javascript code can be gradually modified to add typing information, improving the development experience both by providing better errors at compile time and by providing code completion during development.

As a language fan, I like the effort, just as I like most new language efforts aimed at improving developer productivity: from C# to Rust, Go, Dart and CoffeeScript.

A video introduction from Anders was posted on Microsoft's web site.

The Pros

  • Superset of Javascript allows easy transition from Javascript to typed versions of the code.
  • Open source from the start, using the Apache License.
  • Strong types help developers catch errors before they deploy the code; this is a very welcome addition to the developer toolchest. Script#, Google GWT and C# on the web all try to solve the same problem in different ways.
  • Extensive type inference, so you get to keep a lot of the dynamism of Javascript, while benefiting from type checking.
  • Classes, interfaces and visibility are first-class citizens. This formalizes them for those of us who like this model instead of the roll-your-own prototype system.
  • Nice syntactic sugar reduces boilerplate code to explicit constructs (class definitions for example).
  • TypeScript is distributed as a Node.JS package, and it can be trivially installed on Linux and MacOS.
  • The adoption can be done entirely server-side, or at compile time, and requires no changes to existing browsers or runtimes to run the resulting code.

Out of Scope

Type information is erased when the code is compiled, just like Java erases generic type information when it compiles. This means that the underlying Javascript engine is unable to optimize the resulting code based on the strong type information.

Dart, on the other hand, is more ambitious, as it uses the type information to optimize the quality of the generated code. This means that a function that adds two numbers (function add (a,b) { return a+b;}) can generate native code to add two numbers; basically, it can generate the equivalent of the following C code:

double add (double a, double b)
{
    return a+b;
}

While weakly typed Javascript must generate something like:

JSObject add (JSObject a, JSObject b)
{
    if (type (a) == typeof (double) &&
        type (b) == typeof (double))
        return a.ToDouble () + b.ToDouble ();
    else
        JIT_Compile_add_with_new_types ();
}

The Bad

The majority of the Web is powered by Unix.

Developers use MacOS and Linux workstations to write the bulk of the code, and deploy to Linux servers.

But TypeScript only delivers half of the value of using a strongly typed language to Unix developers: strong typing. Intellisense, code completion and refactoring are only available to Visual Studio Professional users on Windows.

There is no Eclipse, MonoDevelop or Emacs support for any of the language features.

So Microsoft will need to convince Unix developers to use this language merely based on the benefits of strong typing, a much harder task than luring them with both language features and tooling.

There is some basic support for editing TypeScript from Emacs, which is useful to try the language, but without Intellisense, it is obnoxious to use.

Posted on 01 Oct 2012 by Miguel de Icaza

Free Market Fantasies

This recording of a Q&A with Noam Chomsky in 1997 could have been a Q&A session recorded last night about bailouts, corporate welfare, and the various distractions used to keep us in the dark, like caring about "fiscal responsibility".

Also on iTunes and Amazon.

Posted on 07 Sep 2012 by Miguel de Icaza

2012 Update: Running C# on the Browser

With our push to share the kernel of your software in reusable C# libraries and build a native experience per platform (iOS, Android and WP7 on phones; WPF/Windows, MonoMac/OSX and Gtk/Linux on the desktop), one component is always missing: what about a web UI that also shares some of the code?

Until very recently the answer was far from optimal, and included things like putting the kernel on the server and using some .NET stack to ship the HTML to the client.

Today there are two solid choices to run your C# code on the browser and share code between the web and your native UIs.

JSIL

JSIL will translate the ECMA/.NET Intermediate Language into Javascript and will run your code in the browser. JSIL is pretty sophisticated, and their approach to running IL code on the browser also includes a bridge that allows your .NET code to reference web page elements. This means that you can access the DOM directly from C#.

You can try their Try JSIL page to get a taste of what is possible.

Saltarelle Compiler

The Saltarelle Compiler takes a different approach. It is a C# 4.0 compiler that generates JavaScript instead of generating IL. It is interesting that this compiler is built on top of the new NRefactory which is in turn built on top of our C# Compiler as a Service.

It is a fresh, new compiler and, unlike JSIL, it is limited to compiling the C# language. Although it is missing some language features, it is actively being developed.

This compiler was inspired by Script# which is a C#-look-alike language that generated Javascript for consuming on the browser.

Native Client

I left Native Client out, which is not fair, considering that both Bastion and Go Home Dinosaurs are powered by Mono running on Native Client.

The only downside with Native Client today is that it does not run on iOS or Android.

Posted on 06 Sep 2012 by Miguel de Icaza

What Killed the Linux Desktop

True story.

The hard disk that hosted my /home directory on my Linux machine failed so I had to replace it with a new one. Since this machine lives under my desk, I had to unplug all the cables, get it out, swap the hard drives and plug everything back again.

Pretty standard stuff. Plug AC, plug keyboard, plug mouse but when I got to the speakers cable, I just skipped it.

Why bother setting up the audio?

It will likely break again and will force me to go on a hunting expedition to find out more than I ever wanted to know about the new audio system and the driver technology we are using.

A few days ago I spoke to Klint Finley from Wired, who wrote the article titled OSX Killed Linux. The original line of questioning was about my opinion on Gnome 3's shell vs Ubuntu's Unity vs Xfce as competing shells.

Personally, I am quite happy with Gnome Shell. I think the team that put it together did a great job, and I love how it enabled the Gnome designers (who historically only design, barely hack) to actually extend the shell, tune the UI and prototype things without having to beg a hacker to implement things for them. It certainly could use some fixes and tuning, but I am sure they will address those eventually.

What went wrong with Linux on the Desktop

In my opinion, the problem with Linux on the Desktop is rooted in the developer culture that was created around it.

Linus, despite being a low-level kernel guy, set the tone for our community years ago when he dismissed binary compatibility for device drivers. The kernel people might have some valid reasons for it, and might have forced the industry to play by their rules, but the Desktop people did not have the power that the kernel people did. But we did keep the attitude.

The attitude of our community was one of engineering excellence: we do not want deprecated code in our source trees, we do not want to keep broken designs around, we want pure and beautiful designs and we want to eliminate all traces of bad or poorly implemented ideas from our source code trees.

And we did.

We deprecated APIs, because there was a better way. We removed functionality because "that approach is broken", for degrees of broken from "it is a security hole" all the way to "it does not conform to the new style we are using".

We replaced core subsystems in the operating system with poor transition paths. We introduced compatibility layers that were not really compatible, nor were they maintained. When faced with "this does not work", the community response was usually "you are doing it wrong".

As long as you had an operating system that was 100% free, and you could patch and upgrade every component of your operating system to keep up with the system updates, you were fine and it was merely an inconvenience that lasted a few months while the kinks were sorted out.

The second dimension to the problem is that no two Linux distributions agreed on which core components the system should use. Either they did not agree, the schedules of the transitions were out of sync, or there were competing implementations of the same functionality.

The efforts to standardize on a kernel and a set of core libraries were undermined by the Distro of the Day that held the position of power. If you were the top dog, you did not want to make any concessions that would help other distributions catch up with you. Being incompatible became a way of gaining market share. A strategy that continues to be employed by the 800 pound gorillas in the Linux world.

To sum up: (a) things change too quickly, breaking open source and proprietary software alike; (b) incompatibility across Linux distributions.

This killed the ecosystem for third party developers trying to target Linux on the desktop. You would try once, do your best to support the "top" distro or, if you were feeling generous, "the top three" distros, only to find out that your software no longer worked six months later.

Supporting Linux on the desktop became a burden for independent developers.

But at this point, those of us in the Linux world still believed that we could build everything as open source software. The software industry as a whole had a few home runs, and we were convinced we could implement those ourselves: spreadsheets, word processors, design programs. And we did a fine job at that.

Linux pioneered solid package management and the most advanced software updating systems. We did a good job, considering our goals and our culture.

But we missed the big picture. We alienated every third party developer in the process. The ecosystem that has sprung to life with Apple's OSX AppStore is just impossible to achieve with Linux today.

The Rise of OSX

When OSX was launched it was by no means a very sophisticated Unix system. It had an old kernel, an old userland, poor compatibility with modern Unix, primitive development tools and a very pretty UI.

Over time Apple addressed the majority of the problems with its Unix stack: they improved compatibility, improved their kernel, more open source software started working and things worked out of the box.

The most pragmatic contributors to Linux and open source gradually changed their goals from "a world run by open source" to "the open web". Others found that messing around with their audio card every six months to play music and the hardships of watching video on Linux were not worth that much. People started moving to OSX.

Many hackers moved to OSX. It was a good looking Unix, with working audio, PDF viewers, working video drivers, codecs for watching movies and at the end of the day, a very pleasant system to use. Many exchanged absolute configurability of their system for a stable system.

As for myself, I had fallen in love with the iPhone, so using a Mac on a day-to-day basis was a must. Having been part of the Linux Desktop efforts, I felt a deep guilt for liking OSX and moving a lot of my work to it.

What we did wrong

Backwards compatibility, and compatibility across Linux distributions is not a sexy problem. It is not even remotely an interesting problem to solve. Nobody wants to do that work, everyone wants to innovate, and be responsible for the next big feature in Linux.

So Linux was left with idealists that wanted to design the best possible system without having to worry about boring details like support and backwards compatibility.

Meanwhile, you can still run the 2001 Photoshop that came when XP was launched on Windows 8. And you can still run your old OSX apps on Mountain Lion.

Back in February I attended FOSDEM and two of my very dear friends were giggling out of excitement at their plans to roll out a new system that will force many apps to be modified to continue running. They have a beautiful vision to solve a problem that I never knew we had, and that no end user probably cares about, but every Linux desktop user will pay the price.

That day I stopped feeling guilty about my new found love for OSX.

Update September 2nd, 2012

Clearly there is some confusion over the title of this blog post, so I wanted to post a quick follow-up.

What I mean by the title is that Linux on the Desktop lost the race for a consumer operating system. It will continue to be a great engineering workstation (that is why I am replacing the hard disk in my system at home) and yes, I am aware that many of my friends use Linux on the desktop and love it.

But we lost the chance of becoming a mainstream consumer OS. What this means is that nobody is recommending a non-technical person go get a computer with Linux on it for their desktop needs (unless you are doing so for ideological reasons).

We had our share of chances. The best one was when Vista bombed in the marketplace. But we had our own internal battles and struggles to deal with. Some of you have written your own takes on our struggles in that period.

Today, the various Linux desktops are the best they have ever been. Ubuntu and Unity, Fedora and GnomeShell, RHEL and Gnome 2, Debian and Xfce plus the KDE distros. And yet, we still have four major desktop APIs, and about half a dozen popular and slightly incompatible versions of Linux on the desktop: each with its own curated OS subsystems, with different packaging systems, with different dependencies and slightly different versions of the core libraries. Which works great for pure open source, but not so much for proprietary code.

Shipping and maintaining apps for these rapidly evolving platforms is a big challenge.

Linux succeeded in other areas: servers and mobile devices. But on the desktop, our major feature and differentiator is price, and it comes at the expense of a timid selection of native apps and frequent breakage. The Linux Hater blog parodied this on a series of posts called the Greatest Hates.

The only way to fix Linux is to take one distro and one set of components as a baseline, abandon everything else, and have everyone contribute to this single Linux. Whether this is Canonical's Ubuntu, Red Hat's Fedora, Debian's system or a new joint effort is something that intelligent people will disagree about until the end of days.

Posted on 29 Aug 2012 by Miguel de Icaza

Mono 2.11.3 is out

This is our fourth preview release of Mono 2.11. This version includes Microsoft's recently open sourced EntityFramework and has been updated to match the latest .NET 4.5 async support.

We are quite happy with over 349 commits spread like this:

 514 files changed, 15553 insertions(+), 3717 deletions(-)

Head over to Mono's Download Page to get the goods.

Posted on 13 Aug 2012 by Miguel de Icaza

Hiring: Documentation Writer and Sysadmin

We are growing our team at Xamarin, and we are looking to hire both a documentation writer and a system administrator.

For the documentation writer position, you should be familiar with both programming and API design, and be able to type at least 70 wpm (you can check your own speed at play.typeracer.com). Ideally, you would be based in Boston, but we can make this work remotely.

For the sysadmin position, you would need to be familiar with Unix system administration. Linux, Solaris or MacOS would work and you should feel comfortable with automating tasks. Knowledge of Python, C#, Ruby is a plus. This position is for working in our office in Cambridge, MA.

If you are interested, email me at: miguel at xamarin.

Posted on 11 Aug 2012 by Miguel de Icaza

XNA on Windows 8 Metro

The MonoGame Team has been working on adding Windows 8 Metro support to MonoGame.

This will be of interest to all XNA developers that wanted to target the Metro AppStore, since Microsoft does not plan on supporting XNA on Metro, only on the regular desktop.

The effort is taking place on IRC in the #monogame channel on irc.gnome.org. The code is being worked on in the develop3d branch of MonoGame.

Posted on 19 Apr 2012 by Miguel de Icaza

Contributing to Mono 4.5 Support

For a couple of weeks I have been holding off on posting about how to contribute to Mono, since I did not have a good place to point people to.

Gonzalo has just updated our Status pages to include the differences between .NET 4.0 and .NET 4.5; these provide a useful roadmap for features that should be added to Mono.

This is particularly relevant in the context of ASP.NET 4.5; please join us on mono-devel-list@lists.ximian.com.

Posted on 13 Apr 2012 by Miguel de Icaza

Modest Proposal for C#

This is a trivial change to implement, and would turn what today is an error into useful behavior.

Consider the following C# program:

struct Rect {
	public int X, Y, Width, Height;
}

class Window {
	Rect bounds;

	public Rect Bounds {
		get { return bounds; }
		set {
			// Some code that needs to run when the property is set
			WindowManager.Invalidate (bounds);
			WindowManager.Invalidate (value);
			bounds = value;
		}
	}
}

Currently, code like this:

Window w = new Window ();
w.Bounds.X = 10;

Produces the error:

Cannot modify the return value of "Window.Bounds.X" because it is not a variable

The reason is that the property getter returns a copy of the "bounds" structure, and making changes to that returned copy would have no effect on the original property.

If we had used a public field for Bounds, instead of a property, the above code would compile, as the compiler knows how to get to the "Bounds.X" field and set its value.

My suggestion is to alter the C# compiler so that, instead of reporting an error when such a property is modified, it does what the developer expects.

The compiler would rewrite the above code into:

Window w = new Window ();
var tmp = w.Bounds;
tmp.X = 10;
w.Bounds = tmp;

Additionally, it should cluster all of the changes into a single setter call, so:

Window w = new Window ();
w.Bounds.X = 10;
w.Bounds.Y = 20;

Will be compiled as:

Window w = new Window ();
var tmp = w.Bounds;
tmp.X = 10;
tmp.Y = 20;
w.Bounds = tmp;

This avoids calling the setter once for each field that is set on the underlying structure.

The change is trivial and won't break any existing code.

Posted on 11 Apr 2012 by Miguel de Icaza

Can JITs be faster?

Herb Sutter discusses in his Reader QA: When Will Better JITs save Managed Code?:

In the meantime, short answer: C++ and managed languages make different fundamental tradeoffs that opt for either performance or productivity when they are in tension.

[...]

This is a 199x/200x meme that’s hard to kill – “just wait for the next generation of (JIT or static) compilers and then managed languages will be as efficient.” Yes, I fully expect C# and Java compilers to keep improving – both JIT and NGEN-like static compilers. But no, they won’t erase the efficiency difference with native code, for two reasons.

First, JIT compilation isn’t the main issue. The root cause is much more fundamental: Managed languages made deliberate design tradeoffs to optimize for programmer productivity even when that was fundamentally in tension with, and at the expense of, performance efficiency. (This is the opposite of C++, which has added a lot of productivity-oriented features like auto and lambdas in the latest standard, but never at the expense of performance efficiency.) In particular, managed languages chose to incur costs even for programs that don’t need or use a given feature; the major examples are assumption/reliance on always-on or default-on garbage collection, a virtual machine runtime, and metadata.

This is a pretty accurate statement about the mainstream VMs for managed languages (.NET, Java and Javascript).

Designers of managed languages have chosen the path of safety over performance for their designs. For example, accessing elements outside the boundaries of an array is an invalid operation that terminates program execution, as opposed to crashing or creating an exploitable security hole.

But I have an issue with these statements:

Second, even if JIT were the only big issue, a JIT can never be as good as a regular optimizing compiler because a JIT compiler is in the business of being fast, not in the business of generating optimal code. Yes, JITters can target the user’s actual hardware and theoretically take advantage of a specific instruction set and such, but at best that’s a theoretical advantage of NGEN approaches (specifically, installation-time compilation), not JIT, because a JIT has no time to take much advantage of that knowledge, or do much of anything besides translation and code gen.

In general the statement is correct when it comes to early Just-in-Time compilers, and perhaps reflects Microsoft's .NET JIT compiler, but it does not apply to state-of-the-art JIT compilers.

Compilers are tools that convert human-readable text into machine code. The simplest ones perform straightforward translations from the human-readable text into machine code, typically going through one or more intermediate phases.

Optimizing compilers introduce a series of steps that alter their inputs to ensure that the semantics described by the user are preserved while generating better code.

An optimization that could be performed on the high-level representation would transform the textual "5 * 4" in the source code into the constant 20. This is an easy optimization that can be done up-front. Simple dead code elimination based on constant folding like "if (1 == 2) { ... }" can also be trivially done at this level.
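
A toy C# illustration of both of these optimizations (the method names are invented for the example):

using System;

class ConstantFoldingDemo {
	// As written by the developer: "5 * 4" and the impossible branch
	// appear in the source, but need not appear in the generated code.
	static int AreaAsWritten ()
	{
		int area = 5 * 4;
		if (1 == 2)
			Console.WriteLine ("never executed");
		return area;
	}

	// What the compiler effectively produces after constant folding
	// and dead code elimination.
	static int AreaAfterOptimization ()
	{
		return 20;
	}
}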

An optimization on the medium-level representation would analyze the use of variables and could merge subexpressions that are computed more than once, for example:

	int j = (a*b) + (a*b)

Would be transformed by the compiler into:

	int _tmp = a * b;
	int j = _tmp + _tmp;

A low-level optimization would alter a "MULTIPLY REGISTER-1 BY 2" instruction into "SHIFT REGISTER-1 ONE BIT TO THE LEFT".

JIT compilers for Java and .NET essentially break the compilation process in two: they serialize the data in the compiler pipeline and split the work between two programs. The first part of the process dumps the result into a .dll or .class file.

The second step loads this file and generates the native code. This is similar to purchasing frozen food from the supermarket: you unwrap the pie, shove it in the oven and wait 15 minutes.

Saving the intermediate representation and shipping it off to a new system is not a new idea. The TenDRA C and C++ compilers did this. These compilers saved their intermediate representation into an architecture neutral format called ANDF, similar in spirit to .NET's Common Intermediate Language and Java's bytecode. TenDRA used to have an installer program which was essentially a compiler for the target architecture that turned ANDF into native code.

Essentially, JIT compilers have the same information as a batch compiler has today. For a JIT compiler, the problem comes down to striking a balance between the quality of the generated code and the time it takes to generate the code.

JIT compilers tend to go for fast compile times over quality of the generated code. Mono lets users configure this tradeoff by picking the default optimization level, and even lets them use LLVM to perform the heavy-duty optimizations on the code. It is slow, but the generated code quality is the same quality you get from LLVM with C.

Java HotSpot takes a fascinating approach: it does a quick compilation on the first pass, but if the VM detects that a piece of code is being used a lot, it recompiles the code with all the optimizations turned on and then hot-swaps the code.

.NET has a precompiler called NGen, and Mono allows the --aot flag to be passed to perform the equivalent process that TenDRA's installer did. They precompile the code tuned for the current hardware architecture to avoid having the JIT compiler spend time at runtime translating .NET CIL code to native code.

In Mono's case, you can use the LLVM optimizing compiler as the backend for precompiling code, which produces great code. This is the same compiler that Apple now uses on Lion and as LLVM improves, Mono's generated code improves.

NGen has a few limitations in the quality of the code that it can produce. Unlike Mono, NGen acts merely as a pre-compiler and tests suggest that there are very limited extra optimizations applied. I believe NGen's limitations are caused by .NET's Code Access Security feature which Mono never implemented [1].

[1] Mono only supports the CoreCLR security system, but that is an opt-in feature that is not enabled for desktop/server/mobile use. A special set of assemblies are shipped to support this.

Optimizing JIT compilation for Managed Languages

Java, JavaScript and .NET have chosen a path of productivity and safety over raw performance.

This means that they provide automatic memory management, array bounds checking and resource tracking. Those are really the elements that affect the raw performance of these languages.

There are several areas in which managed runtimes can evolve to improve their performance. They won't ever match the performance of hand-written assembly language code, but here are some areas that managed runtimes can work on to improve performance:

Alias analysis: alias analysis is simpler, as arrays are accessed with array operations instead of pointer arithmetic.

Intent: with the introduction of LINQ in C#, developers can shift their attention from how a particular task is done to expressing the desired outcome of an operation. For example:

var biggerThan10 = new List<int> ();
for (int i = 0; i < array.Length; i++){
    if (array [i] > 10)
       biggerThan10.Add (array [i]);
}

Can be expressed now as:

var biggerThan10 = array.Where (x => x > 10);

// with LINQ syntax:
var biggerThan10 = from x in array where x > 10 select x;

Both managed compilers and JIT compilers can take advantage of the rich information that is preserved to turn the expressed intent into an optimized version of the code.

Extend VMs: Just like Javascript was recently extended to support strongly typed arrays to improve performance, both .NET and Java can be extended to let developers turn off some features, at the expense of safety.

.NET could allow developers to run without the CAS sandbox and without AppDomains (like Mono does).

Both .NET and Java could offer "unsafe" sections that would allow performance-critical code to skip array bounds checking (at the expense of crashing or creating a security hole; this can be done today in Mono by using -O=unsafe).
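
For a sense of what such a section looks like, C#'s existing unsafe/fixed construct already lets a developer trade the per-element bounds check for raw pointer access. This is only a sketch of the idea, not Mono's -O=unsafe mechanism, and it requires compiling with the -unsafe switch:

class UnsafeSum {
	// Inside the fixed block the array is walked through a raw pointer,
	// so no bounds check is performed on each element access; an
	// incorrect loop bound would read out of range instead of throwing.
	public static unsafe int Sum (int[] data)
	{
		int total = 0;
		fixed (int* p = data) {
			for (int i = 0; i < data.Length; i++)
				total += p [i];
		}
		return total;
	}
}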

.NET and Mono could provide allocation primitives that allocate objects on a particular heap or memory pool:

	var pool = MemoryPool.Allocate (1024*1024);

	// Allocate the TerrainMesh in the specified memory pool
	var p = new pool, TerrainMesh ();

	[...]
	
	// Release all objects from the pool, all references are
	// nulled out
	//
	Assert.NotEquals (p, null);
	pool.Destroy ();
	Assert.Equals (p, null);
	

Limiting Dynamic Features: Current JIT compilers for Java and .NET have to deal with the fact that code can be extended dynamically by either loading code at runtime or generating code dynamically.

HotSpot leverages its code recompilation to implement sophisticated techniques that perform devirtualization safely.

On iOS and other platforms it is not possible to generate code dynamically, so code generators could trivially devirtualize, inline certain operations and drop features from both their runtimes and the generated code.
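
A small, hypothetical C# illustration of the opportunity: when no new code can be loaded at runtime, a call through a sealed type can be resolved, and even inlined, statically.

abstract class Shape {
	public abstract double Area ();
}

sealed class Circle : Shape {
	public double Radius;

	public override double Area ()
	{
		return System.Math.PI * Radius * Radius;
	}
}

class Renderer {
	// On a full-AOT platform the compiler can prove that "shape" is
	// always a Circle here, so the virtual dispatch can be replaced
	// with a direct or inlined call.
	static double Measure (Circle shape)
	{
		return shape.Area ();
	}
}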

More Intrinsics: An easy optimization that JIT engines can do is map common constructs into native features. For example, we recently inlined the use of ThreadLocal<T> variables. Many Math.* methods can be inlined, and we applied this technique to Mono.SIMD.
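
As an illustration (not the actual runtime code), an access pattern like the following goes through ThreadLocal<T>.Value on every call, which is exactly the kind of hot path that benefits from being turned into an intrinsic:

using System.Threading;

class PerThreadCounter {
	static readonly ThreadLocal<int> count = new ThreadLocal<int> (() => 0);

	// Each call reads and writes the calling thread's private slot;
	// inlining ThreadLocal<T>.Value removes a library call from this path.
	public static void Increment ()
	{
		count.Value = count.Value + 1;
	}
}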

Posted on 04 Apr 2012 by Miguel de Icaza

Microsoft's new Open Sourced Stacks

Yesterday Microsoft announced that another component of .NET would be open sourced. The entire ASP.NET MVC stack is now open source, including the Razor Engine, System.Json, Web API and WebPages.

With this release, they will start accepting external contributions to these products and will be running the project like other open source projects are.

Mono and the new Stacks

We imported a copy of the git tree from Codeplex into GitHub's Mono organization in the aspnetwebstack module.

The mono module itself has now taken a dependency on this module, so the next time that you run autogen.sh in Mono, you will get a copy of the aspnetwebstack inside Mono.

As of today, we have replaced our System.Json implementation (which was originally built for Moonlight) with Microsoft's implementation.

Other libraries like Razor are next, as those are trivially imported into Mono. But ASP.NET MVC 4 itself will have to wait since it depends on extending our own core ASP.NET stack to add asynchronous support.

Our github copy will contain mostly changes to integrate the stack with Mono. If there are any changes worth integrating upstream, we will submit the code directly to Microsoft for inclusion. If you want to experiment with the ASP.NET Web Stack, you should do so with your own copy and work directly with the upstream maintainers.

Extending Mono's ASP.NET Engine

The new ASP.NET engine has been upgraded to support C# 5.0 asynchronous programming and this change will require a number of changes to the core ASP.NET.

We are currently not aware of anyone working on extending our ASP.NET core engine to add these features, but those of us in the Mono world would love to assist enthusiastic new developers or people that love async programming in bringing these features to Mono.
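
As a rough sketch of the kind of code this work would enable (an illustration, not code from the Mono tree), .NET 4.5 lets a handler be written against HttpTaskAsyncHandler so that the request does not hold a worker thread while an await is pending:

using System.Threading.Tasks;
using System.Web;

public class HelloHandler : HttpTaskAsyncHandler
{
	public override async Task ProcessRequestAsync (HttpContext context)
	{
		// Stand-in for real asynchronous I/O (a database call, a web request, ...)
		await Task.Delay (100);
		context.Response.Write ("Hello, async world");
	}
}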

Posted on 28 Mar 2012 by Miguel de Icaza

Mono 2.11.0 is out

After more than a year of development, we are happy to announce Mono 2.11, the first in a series of beta releases that will lead to the next 2.12 stable release.

Continuous Integration

To assist those helping us with testing the release, we have set up a new continuous build system that builds packages for Mac, OpenSUSE and Windows at http://wrench.mono-project.com/Wrench.

Packages

To test drive Mono 2.11, head to our downloads page and select the "Alpha" section of the page to get the packages for Mac, Windows or Linux.

The Linux version is split up in multiple packages.

The Windows version ships with Gtk+ and Gtk#.

The Mac version ships with Gtk+, Gtk#, F#, IronPython and IronRuby and comes in two versions: Mono Runtime Environment (MRE) and the more complete Mono Development Kit (MDK).

At this stage, we recommend that users get the complete kit.

Runtime Improvements in Mono 2.11

There are hundreds of new features in this release, as they have accumulated over a very long time. Every fix that has gone into the Mono 2.10.xx series has been integrated into this release.

In addition, here are some of the highlights of this release.

Garbage Collector: Our SGen garbage collector is now considered production quality and is in use by Xamarin's own commercial products.

On multi-CPU systems the collector will also distribute various tasks across the CPUs; parallel work is no longer limited to the marking phase.

The guide Working with SGen will help developers tune the collector for their needs and discusses tricks that developers can take advantage of.

ThreadLocal<T> is now inlined by the runtime engine, speeding up many threaded applications.

Full Unicode Surrogate Support: this was a long-standing feature request and has now been implemented.

C# 5.0 -- Async Support

Mono 2.11 implements the C# 5.0 language with complete support for async programming.

Mono's class libraries have been updated to better support async programming. See the "4.5 API" section for more details.

C# Backend Rewrite

The compiler code generation backend was rewritten entirely to support both IKVM.Reflection and System.Reflection, which allowed us to unify all the old compilers (mcs, gmcs, dmcs and smcs) into a single compiler: mcs. For more information see Backend Rewrite.

The new IKVM.Reflection backend allows the compiler to consume any mscorlib.dll library, instead of being limited to the ones that were custom built/crafted for Mono.

In addition, the compiler is no longer a big set of static classes; instead, the entire compiler is instance-based, allowing multiple instances of the compiler to co-exist at the same time.

Compiler as a Service

Mono's Compiler as a Service has been extended significantly and reuses the compiler's fully instance based approach (see Instance API for more details).

Mono's compiler as a service is still a low-level API to the C# compiler. The NRefactory2 framework --shared by SharpDevelop and MonoDevelop-- provides a higher-level abstraction that can be used by IDEs and other high-level tools.

C# Shell

Our C# interactive shell and our C# API for compiling C# code can now compile class definitions, in addition to expressions and statements.
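
A rough sketch of what this enables through the Mono.CSharp Evaluator API (the initialization details vary between Mono versions, so treat the setup calls as an approximation):

using Mono.CSharp;

class ReplDemo {
	static void Main ()
	{
		// Setup is approximate; the exact constructor arguments have
		// changed across Mono releases.
		var evaluator = new Evaluator (
			new CompilerContext (new CompilerSettings (), new ConsoleReportPrinter ()));

		// New in this release: class definitions, not just expressions
		// and statements, can be fed to the evaluator.
		evaluator.Run ("class Point { public int X, Y; }");

		object result = evaluator.Evaluate ("new Point () { X = 1, Y = 2 }.X");
		System.Console.WriteLine (result);   // prints 1
	}
}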

4.5 API

4.5 Profile: Mono now defaults to the 4.5 profile, which is a strict superset of the 4.0 profile and reuses the same version number for the assemblies.

Although .NET 4.5 has not yet been officially released, the compiler now defaults to the 4.5 API; if you want to use a different profile API you must pass the -sdk:XXX switch to the command-line compiler.

Because the 4.5 API is a strict superset of the 4.0 API, they both share the same assembly version number, so we actually install the 4.5 libraries into the GAC.

Some of the changes in the 4.5 API family include:

  • New Async methods
  • WinRT compatibility API
  • Newly introduced assemblies: System.Net.Http, System.Threading.Tasks.Dataflow

The new System.Net.Http stack is ideal for developers using the C# 5.0 async framework.
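
A minimal sketch of how the two fit together (the URL and names are just examples):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Downloader {
	// The C# 5.0 compiler rewrites this method into a state machine;
	// the calling thread is not blocked while the download is pending.
	static async Task<int> GetPageLengthAsync (string url)
	{
		var client = new HttpClient ();
		string body = await client.GetStringAsync (url);
		return body.Length;
	}

	static void Main ()
	{
		Console.WriteLine (GetPageLengthAsync ("http://www.mono-project.com/").Result);
	}
}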

Debugging

The GDB support has been extended and can pretty-print more internal variables of Mono, as well as understand SGen internals.

The soft debugger has seen a large set of improvements:

  • Single stepping is now implemented using breakpoints in most cases, speeding it up considerably.
  • Calls to System.Diagnostics.Debugger:Log()/Break () are now routed to the debugger using new UserLog/UserBreak event types.
  • S390x is now supported (Neale Ferguson).
  • MIPS is now supported.
  • Added new methods to Mono.Debugger.Soft and the runtime to decrease the amount of packets transmitted between the debugger and the debuggee. This significantly improves performance over high latency connections like USB.

Mac Support

Mac support has been vastly extended: from a faster GC that uses native Mach primitives, to improvements to many features that previously only worked on Linux, to extending the asynchronous socket support in Mono to use MacOS X-specific primitives.

New Ports

We have completed the Mono MIPS port.

Performance

As a general theme, Mono 2.11 has hundreds of performance improvements in many small places which add up.

Posted on 22 Mar 2012 by Miguel de Icaza

Mono and Google Summer of Code

Students, get your pencils ready for an intense summer of hacking with the Google Summer of Code and Mono!

Check out the Mono organization Summer of Code Project site.

Posted on 16 Mar 2012 by Miguel de Icaza

Cross Platform Game Development in C#

If you missed the live session on Cross Platform Game Development in C# from AltDevConf, you can now watch the presentation.

You can also check the videos for all the AltDevConf presentations.

Posted on 16 Mar 2012 by Miguel de Icaza

Working With SGen

As SGen becomes the preferred garbage collector for Mono, I put together the Working With SGen document. This document is intended to explain the options that you, as a developer, can tune in SGen, as well as some practices that you can adopt in your application to improve its performance.

This document is a complement to the low-level implementation details that we had previously posted.

Posted on 05 Mar 2012 by Miguel de Icaza