Modest Proposal for C#

This is a trivial change to implement, and would turn what today is an error into useful behavior.

Consider the following C# program:

struct Rect {
	public int X, Y, Width, Height;
}

class Window {
	Rect bounds;

	public Rect Bounds {
		get { return bounds; }
		set {
			// Some code that needs to run when the property is set
			WindowManager.Invalidate (bounds);
			WindowManager.Invalidate (value);
			bounds = value;
		}
	}
}

Currently, code like this:

Window w = new Window ();
w.Bounds.X = 10;

Produces the error:

Cannot modify the return value of "Window.Bounds.X" because it is not a variable

The reason is that the property getter returns a copy of the "bounds" structure, and making changes to that copy has no effect on the original property.

If we had used a public field for Bounds, instead of a property, the above code would compile, as the compiler knows how to get to the "Bounds.X" field and set its value.

My suggestion is to alter the C# compiler so that, instead of reporting an error in this case, it does what the developer expects.

The compiler would rewrite the above code into:

Window w = new Window ();
var tmp = w.Bounds;
tmp.X = 10;
w.Bounds = tmp;

Additionally, the compiler should cluster consecutive changes into a single setter call, so:

Window w = new Window ();
w.Bounds.X = 10;
w.Bounds.Y = 20;

Will be compiled as:

Window w = new Window ();
var tmp = w.Bounds;
tmp.X = 10;
tmp.Y = 20;
w.Bounds = tmp;

This avoids calling the setter once for each field that is set on the underlying structure.

The change is trivial and won't break any existing code.

Posted on 11 Apr 2012 by Miguel de Icaza

Can JITs be faster?

Herb Sutter discusses this in his Reader Q&A: When Will Better JITs save Managed Code?:

In the meantime, short answer: C++ and managed languages make different fundamental tradeoffs that opt for either performance or productivity when they are in tension.

[...]

This is a 199x/200x meme that’s hard to kill – “just wait for the next generation of (JIT or static) compilers and then managed languages will be as efficient.” Yes, I fully expect C# and Java compilers to keep improving – both JIT and NGEN-like static compilers. But no, they won’t erase the efficiency difference with native code, for two reasons.

First, JIT compilation isn’t the main issue. The root cause is much more fundamental: Managed languages made deliberate design tradeoffs to optimize for programmer productivity even when that was fundamentally in tension with, and at the expense of, performance efficiency. (This is the opposite of C++, which has added a lot of productivity-oriented features like auto and lambdas in the latest standard, but never at the expense of performance efficiency.) In particular, managed languages chose to incur costs even for programs that don’t need or use a given feature; the major examples are assumption/reliance on always-on or default-on garbage collection, a virtual machine runtime, and metadata.

This is a pretty accurate statement on the differences between the mainstream VMs for managed languages (.NET, Java and JavaScript).

Designers of managed languages have chosen the path of safety over performance for their designs. For example, accessing elements outside the boundaries of an array is an invalid operation that terminates program execution, as opposed to crashing or creating an exploitable security hole.

But I have an issue with these statements:

Second, even if JIT were the only big issue, a JIT can never be as good as a regular optimizing compiler because a JIT compiler is in the business of being fast, not in the business of generating optimal code. Yes, JITters can target the user’s actual hardware and theoretically take advantage of a specific instruction set and such, but at best that’s a theoretical advantage of NGEN approaches (specifically, installation-time compilation), not JIT, because a JIT has no time to take much advantage of that knowledge, or do much of anything besides translation and code gen.

In general the statement is correct when it comes to early Just-in-Time compilers, and perhaps it reflects Microsoft's .NET JIT compiler, but it does not apply to state-of-the-art JIT compilers.

Compilers are tools that convert human-readable text into machine code. The simplest ones perform a straightforward translation from the human-readable text into machine code, and typically go through one or more of these phases:

Optimizing compilers introduce a series of steps that alter their inputs to ensure that the semantics described by the user are preserved while generating better code:

An optimization that could be performed on the high-level representation would transform the textual "5 * 4" in the source code into the constant 20. This is an easy optimization that can be done up-front. Simple dead code elimination based on constant folding like "if (1 == 2) { ... }" can also be trivially done at this level.
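A minimal sketch of what that means in C# terms (the method below is hypothetical, shown only to illustrate the transformation):

static int Example ()
{
	// As written by the programmer:
	int area = 5 * 4;          // constant folding turns this into the constant 20
	if (1 == 2)                // provably false: dead code elimination removes the block
		System.Console.WriteLine ("never reached");
	return area;
}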

An optimization on the medium representation would analyze the use of variables and could merge subexpressions that are computed more than once, for example:

	int j = (a*b) + (a*b)

Would be transformed by the compiler into:

	int _tmp = a * b;
	int j = _tmp + _tmp;

A low-level optimization would alter a "MULTIPLY REGISTER-1 BY 2" instruction into "SHIFT REGISTER-1 ONE BIT TO THE LEFT".

JIT compilers for Java and .NET essentially break the compilation process in two: they serialize the data in the compiler pipeline, and the first part of the process dumps the result into .dll or .class files:

The second step loads this file and generates the native code. This is similar to purchasing frozen food from the supermarket: you unwrap the pie, shove it in the oven and wait 15 minutes:

Saving the intermediate representation and shipping it off to a new system is not a new idea. The TenDRA C and C++ compilers did this. These compilers saved their intermediate representation into an architecture neutral format called ANDF, similar in spirit to .NET's Common Intermediate Language and Java's bytecode. TenDRA used to have an installer program which was essentially a compiler for the target architecture that turned ANDF into native code.

Essentially, JIT compilers have the same information that a batch compiler has today. For a JIT compiler, the problem comes down to striking a balance between the quality of the generated code and the time it takes to generate the code.

JIT compilers tend to go for fast compile times over quality of the generated code. Mono lets users configure this trade-off by picking the default optimization level, and it even lets them pick LLVM to perform the heavy-duty optimizations on the code. That is slow, but the generated code quality is the same code quality you get from LLVM with C.

Java HotSpot takes a fascinating approach: it does a quick compilation on the first pass, but if the VM detects that a piece of code is being used a lot, it recompiles the code with all the optimizations turned on and then hot-swaps the code.

.NET has a precompiler called NGen, and Mono allows the --aot flag to be passed to perform the equivalent process that TenDRA's installer did. Both precompile the code, tuned for the current hardware architecture, to avoid having the JIT compiler spend time at runtime translating .NET CIL code to native code.

In Mono's case, you can use the LLVM optimizing compiler as the backend for precompiling code, which produces great code. This is the same compiler that Apple now uses on Lion and as LLVM improves, Mono's generated code improves.

NGen has a few limitations in the quality of the code that it can produce. Unlike Mono, NGen acts merely as a pre-compiler and tests suggest that there are very limited extra optimizations applied. I believe NGen's limitations are caused by .NET's Code Access Security feature which Mono never implemented [1].

[1] Mono only supports the CoreCLR security system, but that is an opt-in feature that is not enabled for desktop/server/mobile use. A special set of assemblies are shipped to support this.

Optimizing JIT compilation for Managed Languages

Java, JavaScript and .NET have chosen a path of productivity and safety over raw performance.

This means that they provide automatic memory management, array bounds checking and resource tracking. Those are really the elements that affect the raw performance of these languages.

There are several areas in which managed runtimes can evolve to improve their performance. They won't ever match the performance of hand-written assembly language code, but here are some areas that managed runtimes can work on to improve performance:

Alias analysis: arrays are accessed with array operations instead of pointer arithmetic, which makes alias analysis simpler.

Intent: with the introduction of LINQ in C#, developers can shift their attention from how a particular task is done to expressing the desired outcome of an operation. For example:

var biggerThan10 = new List<int> ();
for (int i = 0; i < array.Length; i++){
    if (array [i] > 10)
       biggerThan10.Add (array [i]);
}

Can be expressed now as:

var biggerThan10 = array.Where (x => x > 10);
// with LINQ syntax:
var biggerThan10 = from x in array where x > 10 select x;

Both managed compilers and JIT compilers can take advantage of the rich information that is preserved to turn the expressed intent into an optimized version of the code.

Extend VMs: Just like JavaScript was recently extended to support strongly typed arrays to improve performance, both .NET and Java could be extended to let developers trade some safety for performance by turning certain features off.

.NET could allow developers to run without the CAS sandbox and without AppDomains (like Mono does).

Both .NET and Java could offer "unsafe" sections that would allow performance-critical code to skip array bounds checking (at the expense of crashing or creating a security gap; this can be done today in Mono by using -O=unsafe).
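C# already offers a glimpse of what such a mode looks like: inside an unsafe block, pointer arithmetic over a pinned array bypasses the per-access bounds check. A minimal sketch (it must be compiled with unsafe code enabled):

static unsafe long Sum (int [] data)
{
	long total = 0;
	// Pinning the array and walking it with a pointer trades the
	// per-element bounds check for raw speed (and raw danger).
	fixed (int *p = data) {
		for (int i = 0; i < data.Length; i++)
			total += p [i];
	}
	return total;
}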

.NET and Mono could provide allocation primitives that allocate objects on a particular heap or memory pool:

	var pool = MemoryPool.Allocate (1024*1024);

	// Allocate the TerrainMesh in the specified memory pool
	var p = new pool, TerrainMesh ();

	[...]
	
	// Release all objects from the pool, all references are
	// nulled out
	//
	Assert.NotEquals (p, null);
	pool.Destroy ();
	Assert.Equals (p, null);
	

Limiting Dynamic Features: Current JIT compilers for Java and .NET have to deal with the fact that code can be extended dynamically by either loading code at runtime or generating code dynamically.

HotSpot leverages its ability to recompile code at runtime to implement sophisticated techniques, such as performing devirtualization safely.

On iOS and other platforms it is not possible to generate code dynamically, so code generators could trivially devirtualize, inline certain operations and drop features from both their runtimes and the generated code.
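As a hypothetical illustration (the Shape and Circle types below are made up), once the code generator knows that no new code can ever be loaded, a virtual call can become a direct, inlinable call:

class Shape {
	public virtual double Area () { return 0; }
}

sealed class Circle : Shape {
	public double Radius;
	public override double Area () { return System.Math.PI * Radius * Radius; }
}

class Measurements {
	// Because Circle is sealed and no new code can be loaded at runtime
	// on such platforms, the call below can be compiled as a direct call
	// to Circle.Area and even inlined.
	public static double Measure (Circle c)
	{
		return c.Area ();
	}
}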

More Intrinsics: An easy optimization that JIT engines can do is map common constructs into native features. For example, we recently inlined the use of ThreadLocal<T> variables. Many Math.* methods can be inlined, and we applied this technique to Mono.SIMD.
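As a rough sketch of why this matters, consider a counter kept in a ThreadLocal<T> (the class below is illustrative): with the intrinsic in place, each access compiles down to a direct read or write of the thread's slot instead of going through the general ThreadLocal<T> machinery.

using System.Threading;

class Counters {
	static ThreadLocal<int> perThread = new ThreadLocal<int> (() => 0);

	public static void Increment ()
	{
		// With the runtime intrinsic, this property access is inlined
		// into a few instructions that touch the thread-local slot.
		perThread.Value = perThread.Value + 1;
	}
}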

Posted on 04 Apr 2012 by Miguel de Icaza

Microsoft's new Open Sourced Stacks

Yesterday Microsoft announced that another component of .NET would be open sourced. The entire ASP.NET MVC stack is now open source, including the Razor Engine, System.Json, Web API and WebPages.

With this release, they will start accepting external contributions to these products and will be running the project like other open source projects are.

Mono and the new Stacks

We imported a copy of the git tree from Codeplex into GitHub's Mono organization in the aspnetwebstack module.

The mono module itself has now taken a dependency on this module, so the next time that you run autogen.sh in Mono, you will get a copy of the aspnetwebstack inside Mono.

As of today, we have removed our System.Json implementation (which was originally built for Moonlight) and replaced it with Microsoft's implementation.

Other libraries like Razor are next, as those are trivially imported into Mono. But ASP.NET MVC 4 itself will have to wait since it depends on extending our own core ASP.NET stack to add asynchronous support.

Our GitHub copy will contain mostly changes to integrate the stack with Mono. If there are any changes worth integrating upstream, we will submit the code directly to Microsoft for inclusion. If you want to experiment with the ASP.NET Web Stack, you should do so in your own fork and work directly with the upstream maintainers.

Extending Mono's ASP.NET Engine

The new ASP.NET engine has been upgraded to support C# 5.0 asynchronous programming and this change will require a number of changes to the core ASP.NET.

We are currently not aware of anyone working on extending our ASP.NET core engine to add these features, but those of us in the Mono world would love to assist enthusiastic new developers or people that love async programming in bringing these features to Mono.

Posted on 28 Mar 2012 by Miguel de Icaza

Mono 2.11.0 is out

After more than a year of development, we are happy to announce Mono 2.11, the first in a series of beta releases that will lead to the next 2.12 stable release.

Continuous Integration

To assist those helping us with testing the release, we have set up a new continuous build system that builds packages for Mac, OpenSUSE and Windows at http://wrench.mono-project.com/Wrench.

Packages

To test drive Mono 2.11, head to our downloads page and select the "Alpha" section of the page to get the packages for Mac, Windows or Linux.

The Linux version is split into multiple packages.

The Windows version ships with Gtk+ and Gtk#.

The Mac version ships with Gtk+, Gtk#, F#, IronPython and IronRuby and comes in two versions: Mono Runtime Environment (MRE) and the more complete Mono Development Kit (MDK).

At this stage, we recommend that users get the complete kit.

Runtime Improvements in Mono 2.11

There are hundreds of new features available in this release as we have accumulated them over a very long time. Every fix that has gone into the Mono 2.10.xx series has been integrated into this release.

In addition, here are some of the highlights of this release.

Garbage Collector: Our SGen garbage collector is now considered production quality and is in use by Xamarin's own commercial products.

On multi-CPU systems, the collector will also distribute various tasks across the CPUs; parallel work is no longer limited to the marking phase.

The guide Working with SGen will help developers tune the collector for their needs and discusses tricks that developers can take advantage of.

ThreadLocal<T> is now inlined by the runtime engine, speeding up many threaded applications.

Full Unicode Surrogate Support: this was a long-standing feature request and has now been implemented.

C# 5.0 -- Async Support

Mono 2.11 implements the C# 5.0 language with complete support for async programming.

Mono's class libraries have been updated to better support async programming. See the section "4.5 API" for more details.
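For readers who have not seen the feature yet, here is a minimal sketch of the kind of code this enables (the method and file are made up for illustration):

using System.IO;
using System.Threading.Tasks;

class LineCounter {
	public static async Task<int> CountLinesAsync (string path)
	{
		// await suspends the method without blocking the calling thread;
		// the compiler generates the underlying state machine.
		string text = await Task.Run (() => File.ReadAllText (path));
		return text.Split ('\n').Length;
	}
}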

C# Backend Rewrite

The compiler code generation backend was rewritten entirely to support both IKVM.Reflection and System.Reflection which allowed us to unify all the old compilers (mcs, gmcs, dmcs and smcs) into a single compiler: mcs. For more information see Backend Rewrite.

The new IKVM.Reflection backend allows the compiler to consume any mscorlib.dll library, instead of being limited to the ones that were custom built/crafted for Mono.

In addition, the compiler is no longer a big set of static classes, instead the entire compiler is instance based, allowing multiple instances of the compiler to co-exist at the same time.

Compiler as a Service

Mono's Compiler as a Service has been extended significantly and reuses the compiler's fully instance based approach (see Instance API for more details).

Mono's compiler as a service is still a low-level API to the C# compiler. The NRefactory2 framework --shared by SharpDevelop and MonoDevelop-- provides a higher-level abstraction that can be used by IDEs and other high-level tools.

C# Shell

Our C# interactive shell and our C# API for compiling C# code can now compile class definitions, in addition to expressions and statements.

4.5 API

4.5 Profile: Mono now defaults to the 4.5 profile, which is a strict superset of the 4.0 profile and reuses the same version number for the assemblies.

Although .NET 4.5 has not yet been officially released, the compiler now defaults to the 4.5 API; if you want to use a different profile API you must use the -sdk:XXX switch on the command line compiler.

Because the 4.5 API is a strict superset of the 4.0 API, they both share the same assembly version number, so we actually install the 4.5 library into the GAC.

Some of the changes in the 4.5 API family include:

  • New Async methods
  • WinRT compatibility API
  • Newly introduced assemblies: System.Net.Http, System.Threading.Tasks.Dataflow

The new System.Net.Http stack is ideal for developers using the C# 5.0 async framework.
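For example, HttpClient in System.Net.Http exposes Task-based methods that combine naturally with await. A minimal sketch:

using System.Net.Http;
using System.Threading.Tasks;

class Downloader {
	public static async Task<string> DownloadAsync (string url)
	{
		using (var client = new HttpClient ()) {
			// GetStringAsync returns a Task<string> that can be awaited.
			return await client.GetStringAsync (url);
		}
	}
}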

Debugging

The GDB support has been extended and can pretty print more internal variables of Mono as well as understanding SGen internals.

The soft debugger has seen a large set of improvements:

  • Single stepping is now implemented using breakpoints in most cases, speeding it up considerably.
  • Calls to System.Diagnostics.Debugger:Log()/Break () are now routed to the debugger using new UserLog/UserBreak event types.
  • S390x is now supported (Neale Ferguson).
  • MIPS is now supported.
  • Added new methods to Mono.Debugger.Soft and the runtime to decrease the amount of packets transmitted between the debugger and the debuggee. This significantly improves performance over high latency connections like USB.

Mac Support

Mac support has been vastly extended: from a faster GC that uses native Mach primitives, to improving many features that previously only worked on Linux, to extending the asynchronous socket support in Mono to use MacOS X specific primitives.

New Ports

We have completed the Mono MIPS port.

Performance

As a general theme, Mono 2.11 has hundreds of performance improvements in many small places which add up.

Posted on 22 Mar 2012 by Miguel de Icaza

Mono and Google Summer of Code

Students, get your pencils ready for an intense summer of hacking with the Google Summer of Code and Mono!

Check out the Mono organization Summer of Code Project site.

Posted on 16 Mar 2012 by Miguel de Icaza

Cross Platform Game Development in C#

If you missed the live session on Cross Platform Game Development in C# from AltDevConf, you can now watch the presentation.

You can also check the videos for all the AltDevConf presentations.

Posted on 16 Mar 2012 by Miguel de Icaza

Working With SGen

As SGen becomes the preferred garbage collector for Mono, I put together the Working With SGen document. This document is intended to explain the options that you, as a developer, can tune in SGen, as well as some practices that you can adopt in your application to improve its performance.

This document is a complement to the low-level implementation details that we had previously posted.

Posted on 05 Mar 2012 by Miguel de Icaza

Gtk+ and MacOS X

We have released a new build of Mono 2.10.9 (Beta) with the latest version of Gtk+2 containing dozens of bug fixes done by the Lanedo team to improve the quality of Mono/Gtk+ apps on OSX.

This is still a beta release; please take it out for a spin. We are almost ready to graduate this as our stable Mono package.

Posted on 05 Mar 2012 by Miguel de Icaza

Phalanger's PHP on Mono/.NET Updates

The Phalanger developers have published an updated set of benchmarks of their PHP compiler running on top of .NET vs PHP and Cached PHP, and the results are impressive:

There are two cases in the language shootout where they are slower than PHP (out of eighteen cases), and they are also slower on eight of thirty-one microbenchmarks.

But in general with real applications like WordPress and MediaWiki, the performance gains are impressive.

Posted on 05 Mar 2012 by Miguel de Icaza

C# for Gaming: Slides

You can now get the Slides for my Mono for Game Development talk.

Or you can go straight to the resources for the talk.

Posted on 11 Feb 2012 by Miguel de Icaza

C# for Gaming: AltDevConf This Weekend

It is a great honor to participate this weekend in the online AltDevConf conference. This is an online two-day event:

Our goal is twofold: To provide free access to a comprehensive selection of game development topics taught by leading industry experts, and to create a space where bright and innovative voices can also be heard. We are able to do this because, as an online conference, we are not subject to the same logistic and economic constraints imposed by the traditional conference model.

I will be participating in the talk on Cross Platform Game Development using C# with Matthieu Laban and Philippe Rollin.

You can register here for our session on Saturday at 3pm Eastern Standard Time, noon Pacific Time, and 9pm Paris Time.

If you are located in the Paris time zone, that means you get to enjoy the talk sipping a hot chocolate with some tasty baguettes.

Posted on 09 Feb 2012 by Miguel de Icaza

Mono in 2011

This was a very interesting year for Mono, and I wanted to capture some of the major milestones and news from the project, as well as share a bit of what is coming up for Mono in 2012.

I used to be able to list all of the major applications and great projects built with Mono. The user base has grown so large that I am no longer able to do this. 2011 was a year that showed an explosion of applications built with Mono.

In this post I list a few of the high-profile projects, but it is by no means an exhaustive list. There are too many great products and amazing technologies being built with Mono, and a comprehensive list would take too long to assemble.

Xamarin

The largest event for Mono this year was that the team working on Mono technologies at Novell was laid off after Novell was acquired.

We got back on our feet, and two weeks after the layoffs had taken place, the original Mono team incorporated as Xamarin.

Xamarin's goal is to deliver great productivity and great tools for mobile developers. Our main products are Mono on iOS and Mono on Android.

These products are built on top of the open source Mono project and the MonoDevelop project. We continue to contribute extensively to these two open source projects.

Launching Xamarin was a huge effort for all of us.

Xamarin would not have been possible without our great customers and friends in the industry. Many people cared deeply about the technology and helped us get up and running.

In July, we announced an agreement with Attachmate that ensured a bright future for our young company.

A couple of days later, we were ready to sell the mobile products that had been previously developed at Novell, and we started to provide all existing Novell customers with ongoing support for their Mono-based products.

Half a year later, we grew the company and continued to do what we like the most: writing amazing software.

Meanwhile, our users have created amazing mobile applications. You can see some of those in our App Catalog.

C# Everywhere

On the Mobile Space: This year Sony jumped to C# in a big way with the introduction of PS Suite (see the section below) and Nokia adopted Windows Phone 7 as their new operating system.

And we got you covered on Android and iOS for all of your C# needs.

On the Browser: we worked with Google to bring Mono to Native Client. In fact, every demo shown at the Google Native Client event on December 8th was powered by Mono.

On the Desktop: this year we added MacOS X as a first-class citizen in the world of supported Mono platforms. We did this by introducing MonoMac 1.0 and supporting Apple's MacStore with it.

Games: they continue to take advantage of C#'s blend of performance and high-level features. Read more in my GDC 2011 post.

It is a wild new world for C# and .NET developers that were used to building their UIs using only ASP.NET or Winforms. It has been fascinating to see developers evolve their thinking from a Microsoft-only view of the world to a world where they design libraries and applications that split the presentation layer from the business logic.

Developers that make this transition will be able to get great native experiences on each device and form factor.

Sony PSSuite - Powered by Mono

At GDC, Sony announced that PS Suite was built on top of Mono. PS Suite is a new development stack for cross-platform games and cross-platform applications to run on Android devices and Sony Vita.

The PS Suite presentation is available in this video.

In particular, watch the game in Video 2 to get a feeling for the speed of a 3D game purely written in managed code (no native code):

Some of the juicy details from the GDC announcement:

  • PS Suite will have an open appstore model, different than the traditional game publishing business.
  • Open SDK, available for everyone at launch time.
  • PS Suite supports both game development with Sony's 3D libraries as well as regular app development.
  • Cross-platform, cross-device, using the ECMA Common Intermediate Language.
  • Code in C#, run using Mono.
  • GUI Designer called "UI Composer" for non-game applications.
  • The IDE is based on MonoDevelop.
  • Windows-simulator is included to try things out quickly.

MonoDevelop on PSSuite:

PS Suite comes with a GUI Toolkit and this is what the UI composer looks like:

Google Native Client

Google engineers ported Mono to run on the sandboxed environment of Native Client. Last year they added support for Mono's code generator to output code for Native Client using Mono's static compiler.

This year Google extended Native Client to support Just in Time Compilation, in particular, Mono's brand of JIT compilation. This was used by all three demos shown at the Google Native Client event a couple of days ago:

Unity Powered Builder

This is another game built with Unity's Native Client code generator:

To get the latest version of Mono with support for Native Client, download and build Mono from Google's branch on github.

Mono 2.10

This was the year of Mono 2.10. We went from a beta release for Mono 2.10 in January to making it our new stable release for Mono.

While the world is on Mono 2.10, we have started our work to get Mono 2.12 out in beta form in January.

Mono on Android

This year we launched Mono for Android, a product that consists of a port of Mono to the Android OS, C# bindings to the native Java APIs and IDE support for both MonoDevelop and Visual Studio.

The first release came out in April. It was rough around the edges, but thanks to the amazing community of users that worked with us during the year, we solved the performance problems and the slow debugging, vastly improved the edit/debug/deploy cycle, and managed to catch up to Google's latest APIs with the introduction of Mono for Android 4.0.

Mono on iOS

Just like Android, we have been on a roll with MonoTouch.

In short, this year:

  • We kept up with Apple's newly introduced APIs (UIKit, iCloud, AirPlay, Bluetooth, Newsstand, CoreImage).
  • Integrated XCode 4's UI designer with MonoDevelop and added support for storyboards.
  • Added the option of using LLVM for our builds, bringing thumb support and ARMv7 support along the way.

We started beta-testing a whole new set of features to be released early next year: a new unit testing framework, a heap profiler, integrating MonoTouch.Dialog in the product and improving the debug/deploy process.

Mono for iOS has been on the market now for two years, and many products are coming to the market based on it.

Phalanger

Phalanger is a PHP compiler that runs on the .NET and Mono VMs and is powered by the Dynamic Language Runtime.

It is so complete that it can run both MediaWiki and WordPress out of the box, and it does so while running faster than they would under PHP.

This year the Phalanger guys released Phalanger 3.0 which now runs on Mono (previously they required the C++/CLI compiler to run).

Phalanger's performance is impressive, as it is just as fast as the newly announced Facebook HipHop VM for PHP. The major difference is that Phalanger is a complete PHP implementation, while the HipHop VM is not yet complete.

The other benefit of Phalanger is that it is able to participate and interop with code written in other .NET languages as well as benefitting from the existing .NET interop story (C, C++).

CXXI

Our technology to bridge C# and C++ matured to the point that it can be used by regular users.

Compiler as a Service

This year our C# compiler was expanded in three directions:

  • We completed async/await support
  • We completed the two code output engines (System.Reflection.Emit and IKVM.Reflection).
  • We improved the compiler-as-a-service features of the compiler.

Our async/await support is scheduled to go out with the first preview of Mono 2.11 in early January. We can not wait to get this functionality to our users and start building a new generation of async-friendly/ready desktop, mobile and server apps.

One major difference between our compiler-as-a-service and Microsoft's version of the C# compiler as a service is that we support two code generation engines, one generates complete assemblies (like Microsoft does) and the other one is able to be integrated with running code (this is possible because we use System.Reflection.Emit and we can reference static or dynamic code from the running process).

We have also been improving the error recovery components of the compiler as this is going to power our new intellisense/code completion engine in MonoDevelop. Mono's C# compiler is the engine that is powering the upcoming NRefactory2 library.

You can read more about our compiler as a service updates.

Unity3D

Unity is one of Mono's major users. At this point Unity no longer requires an introduction: they went from being an independent game engine a few years ago to being one of the major game engine platforms in the industry this year.

The Unity engine runs on every platform under the sun: from the consoles (PS3, Wii and XBox360) to iPhones and Androids, and on your desktop either with the Unity3D plugin or using Google's Native Client technology. The list of games being built with Unity keeps growing every day and they are consistently among the top sellers on every app store.

Mono is the engine that powers the scripts and custom code in games and applications built with Unity3D and it also powers the actual tool that users use to build games, the Unity3D editor:

The editor itself is implemented in terms of Unity primitives, and users can extend the Unity3D editor with C#, UnityScript or Boo scripts dynamically.

One of my favorite games built with Unity3D is Rochard, which was demoed earlier this year on a PS3 at GDC and is now also available on Steam:

Microsoft

Just before the end of the year, Microsoft shipped Kinectimals for iOS systems.

Kinectimals is built using Unity and this marks the first time that Microsoft ships a software product built with Mono.

Then again, this year has been an interesting year for Microsoft: they have embraced open source technologies for Azure, released SDKs for iOS and Android at the same time as they ship SDKs for their own platforms, and shipped various applications on Apple's AppStore for iOS.

MonoDevelop

We started the year with MonoDevelop 2.4 and we finished after two major releases with MonoDevelop 2.8.5.

In the course of the year, we added:

  • Native Git support
  • Added .NET 4.0 project support, upgraded where possible to XBuild/MSBuild
  • MonoMac Projects
  • XCode 4 support for MonoMac, MonoTouch and Storyboards
  • Support for Android development
  • Support for iOS5 style properties
  • Major upgrade to the debugger engine
  • Adopted native dialogs on OSX and Windows

Our Git support was based on a machine assisted translation of the Java jGit library using Sharpen. Sharpen has proved to be an incredibly useful tool to bring Java code to the .NET world.

SGen

Our precise collector has gotten a full year of testing now. With Mono 2.10 we made it very easy for developers to try it out. All users had to do was run their programs with the --sgen flag, or set MONO_ENV_OPTIONS to gc=sgen.

Some of the new features in our new Garbage Collector include:

  • Windows, MacOS X and S390x ports of SGen (in addition to the existing x86, x86-64 and ARM ports).
  • Lock-free allocation to improve scalability (we only take locks when we run out of memory).
  • Work stealing parallel collector and a parallel nursery collector, to take advantage of extra CPUs on the system to help with the GC.
  • Performance and scalability work: as our users tried things out in the field, we identified hot-spots in SGen, which we have been addressing.

As we are spending so much time on ARM-land these days, SGen has also gained various ARM-specific optimizations.

SGen was designed primarily to be used by Mono, and we are extending it beyond being a pure garbage collector for Mono, to support scenarios where our garbage collector has to be integrated with other object systems and garbage collectors. This is the case for Mono for Android, where we now have a cooperative garbage collector that works hand-in-hand with Dalvik's GC. We also introduced support for toggle references to better support Objective-C environments like MonoTouch and MonoMac.

XNA and Mono: MonoGame

Ever since Microsoft published the XNA APIs for .NET, developers have been interested in bringing XNA to Mono-based platforms.

There was a MonoXNA project, which was later reused by projects like SilverXNA (an XNA implementation for Silverlight) and later XNAtouch, an implementation of XNA for the iPhone powered by MonoTouch. Both were narrow projects focused on single platforms.

This year, the community got together and turned the single platform XNATouch into a full cross-platform framework, the result is the MonoGame project:

Platform Support Matrix

Currently MonoGame's strength is in building 2D games. They already have an extensive list of games that have been published on the iOS AppStore and the Mac AppStore, and they were recently featured in Channel 9's Coding For Fun: MonoGame Write Once Play Everywhere.

An early version of MonoGame/XnaTouch powers SuperGiantGame's Bastion game on Google's Native Client, which allows users of Windows, Mac and Linux desktop systems to run the same executable on all systems. If you are running Chrome, you can install it in seconds.

Incidentally, Bastion just won three awards at the Spike VGA awards including Best Downloadable Game, Best Indie Game, and Best Original Score.

The MonoGame team had been relatively quiet for most of 2011 as they were building their platform, but they got into a good release cadence with the MonoGame 2.0 release in October, when they launched as a cross-platform engine, followed by a tasty 2.1 release only two weeks ago.

With the addition of OpenGL ES 2.0 support and 3D capabilities to MonoGame, 2012 looks like it will be a great year for the project.

Gtk+

Since MonoDevelop is built on top of the Gtk+ toolkit, and since Gtk+ was primarily a Unix toolkit, there have been a few rough areas for our users on both Mac and Windows.

This year we started working with the amazing team at Lanedo to improve Gtk+ 2.x to work better on Mac and Windows.

The results are looking great and we want to encourage developers to try out our new Beta version of Mono, which features the updated Gtk+ stack.

This new Gtk+ stack solves many of the problems that our users have reported over the past few months.

Hosting Bills

I never tracked Mono downloads, as I always felt that tracking download numbers for open source code that gets repackaged and redistributed elsewhere was pointless.

This summer we moved the code hosting from Novell to Xamarin and we were surprised by our hosting bills.

The dominating force is binaries for Windows and MacOS, communities that tend not to download source and package the software themselves. This is the breakdown of completed downloads (not partial or interrupted ones) for our first month of hosting Mono:

  • 39,646 - Mono for Windows (Runtime + SDK)
  • 27,491 - Mono for Mac (Runtime)
  • 9,803 - Mono for Windows (Runtime)
  • 9,910 - Mono for Mac (Runtime + SDK)

  • Total: 86,850 downloads for Windows and Mac

These numbers are only for the Mono runtime, not MonoDevelop, the MonoDevelop add-ins or any other third party software.

It is also worth pointing out that none of our Windows products (MonoDevelop for Windows, or Mono for Android on Windows) use the Mono runtime. So these downloads are for people doing some sort of embedding of Mono on their applications on Windows.

At this point, we got curious. We ran a survey for two days and collected 3,949 answers. This is a summary of the answers:

What type of application will you run with Mono?

This one was fascinating: many new users to the .NET world:

The best results came from the free-form answers in the form. I am still trying to figure out how to summarize these answers; they are all very interesting, but they are also all over the map.

Some Key Quotes

When I asked last week for stories of how you used Mono in 2011, some of you posted on the thread, and some of you emailed me.

Here are a couple of quotes from Mono users:

I can't do without Mono and I don't just mean the iOS or Android dev with C# but MonoMac and Mono for *nix too. Thanks for everything; from the extraordinary tools to making hell turn into heaven, and thank you for making what used to be a predicament to effortless development pleasure.

I don't think we could have achieved our solutions without Mono in enterprise mobile development. It addresses so many key points, it is almost a trade secret. We extensively use AIR and JavaScript mobile frameworks too but ultimately we desperately need 1-to-1 mapping of the Cocoa Touch APIs or tap into low level features which determines our choice of development platform and frameworks.

That's where Mono comes in.

Gratefulness and paying polite respects aside, the key tenets of Mono we use are:

  • shared C# code base for all our enterprise solutions - achieving the write once, compile everywhere promise with modern language and VM features everyone demands and expects in this century
  • logical, consistent and self-explanatory wrapper APIs for native services - allows us to write meta APIs of our own across platforms
  • low latency, low overhead framework
  • professional grade IDE and tools
  • native integration with iOS tools and development workflow
  • existence of satisfactory documentation and support
  • legal clarity - favorable licensing options
  • dedicated product vision via Xamarin - commercial backing
  • community support

Koen Pijnenburg shared this story with me:

We've been in touch a few times before and would like to contribute my story. It's not really an interesting setup, but a real nice development for Mono(Touch). I've been developing app for iPhone since day 1, I was accepted in the early beta for the App Store. On launch day july 2008, 2 of the 500 apps in the App Store were mine, my share has decreased a lot in the past years ;)

I really, really, really like football(soccer), maybe you do also, I don't know. In september 2008 I created the first real soccer/football stats app for the iPhone called My Football. This was a huge succes, basically no competition at that time. In 2009 I released My Football Pro, an app with 800 leagues worldwide, including live data for more then 100 leagues. Since then I created lots of apps, all created with the iPhone SDK and with Objective-C.

Since the launch of MonoTouch, it merged the best of two worlds in my opinion. I've been a Mono/.NET developer for years before the iPhone apps, for me it was love at first line of code.

The last year I've increased my work with MonoTouch / Droid /MonoGame(Poppin' Frenzy etc ;)), and focused less on working with native SDK's only. Since our My Football apps are at the end of their lifecycle in this form, we are working on a new line of My Football apps. Our base framework supporting our data, is built with Mono, and the apps UI will be built with MonoTouch / MonoDroid / WP7 etc.

Included is the screenshot of our first app built with the framework, My Football Pro for iPad. It has a huge amount of data, stats / tables / matches / live data for more then 800 leagues worldwide. We think it's a great looking app!

Working with MonoTouch is fantastic and just wanted you to know this!

Mono on Mainframes

This year turned out to show a nice growth in the deployment of Mono on IBM zSeries computers.

Some are using ASP.NET, some are using Mono in headless mode. This was something that we were advocating a few years ago, and this year the deployments went live both in Brazil and Europe.

Neale Ferguson from Sinenomine has kept the zSeries port active and in shape.

Mono and ASP.NET

This year we delivered enough of ASP.NET 4.0 to run Microsoft's ASP.NET MVC 3.

Microsoft ASP.NET MVC 3 is a strange beast. It is licensed under a great open source license (MS-PL) but the distribution includes a number of binary blobs (the Razor engine).

I am inclined to think that the binaries are not under the MS-PL, but strictly speaking, since the binaries are part of the MS-PL distribution labeled as such, the entire download is MS-PL.

That being said, we played it safe in Mono-land and we did not bundle ASP.NET MVC3 with Mono. Instead, we provide instructions on how users can deploy ASP.NET MVC 3 applications using Razor as well as pure Razor apps (those with .cshtml extensions) with Mono.

2012, the year of Mono 2.12

2012 will be a year dominated by our upcoming Mono release: Mono 2.12. It packs a year's worth of improvements to the runtime, to our build process and to the API profiles.

Mono 2.12 defaults to the .NET 4.x APIs and includes support for .NET 4.5.

This is going to be the last time that we branch Mono for such an extended period of time. We are changing our development process and release policies to reduce the amount of code that sits in a warehouse waiting to be rolled out to developers.

ECMA

We wrapped up our work on updating the ECMA CLI standard this year. The resulting standard is now at ISO and going through the standard motions to become an official ISO standard.

The committee is getting ready for a juicy year ahead of us where we are shifting gears from polish/details to take on significant extensions to the spec.

Posted on 21 Dec 2011 by Miguel de Icaza

CXXI: Bridging the C++ and C# worlds.

The Mono runtime engine has many language interoperability features but has never had a strong story to interop with C++.

Thanks to the work of Alex Corrado, Andreia Gaita and Zoltan Varga, this is about to change.

The short story is that the new CXXI technology allows C#/.NET developers to:

  • Easily consume existing C++ classes from C# or any other .NET language
  • Instantiate C++ objects from C#
  • Invoke C++ methods in C++ classes from C# code
  • Invoke C++ inline methods from C# code (provided your library is compiled with -fkeep-inline-functions or that you provide a surrogate library)
  • Subclass C++ classes from C#
  • Override C++ methods with C# methods
  • Expose instances of C++ classes or mixed C++/C# classes to both C# code and C++ as if they were native code.

CXXI is the result of two summers of work from Google's Summer of Code towards improving the interoperability of Mono with the C++ language.

The Alternatives

This section is merely a refresher of the underlying technologies for interoperability supported by Mono and how developers coped with C++ and C# interoperability in the past. You can skip it if you want to jump straight to getting started with CXXI.

As a reminder, Mono provides a number of interoperability bridges, mostly inherited from the ECMA standard. These bridges include:

  • The bi-directional "Platform Invoke" technology (P/Invoke) which allows managed code (C#) to call methods in native libraries as well as support for native libraries to call back into managed code.
  • COM Interop which allows Mono code to transparently call C or C++ code defined in native libraries as long as the code in the native libraries follows a few COM conventions [1].
  • A general interceptor technology that can be used to intercept method invocations on objects.

When it came to getting C# to consume C++ objects the choices were far from great. For example, consider a sample C++ class that you wanted to consume from C#:

class MessageLogger {
public:
	MessageLogger (const char *domain);
	void LogMessage (const char *msg);
};

One option to expose the above to C# would be to wrap the MessageLogger class in a COM object. This might work for some high-level objects, but it is a fairly repetitive exercise and also one that is devoid of any fun. You can see what this looks like in the COM Interop page.

The other option was to produce a C file that was C callable, and invoke those C methods. For the above constructor and method you would end up with something like this in C:

/* bridge.cpp, compile into bridge.so */
/* extern "C" keeps the names unmangled, so DllImport can find them */
extern "C" MessageLogger *Construct_MessageLogger (const char *domain)
{
	return new MessageLogger (domain);
}

extern "C" void LogMessage (MessageLogger *logger, const char *msg)
{
	logger->LogMessage (msg);
}

And your C# bridge, like this:

using System;
using System.Runtime.InteropServices;

class MessageLogger {
	IntPtr handle;

	[DllImport ("bridge")]
	extern static IntPtr Construct_MessageLogger (string msg);

	public MessageLogger (string msg)
	{
		handle = Construct_MessageLogger (msg);
	}

	[DllImport ("bridge")]
	extern static void LogMessage (IntPtr handle, string msg);

	public void LogMessage (string msg)
	{
		LogMessage (handle, msg);
	}
}

This gets tedious very quickly.

Our PhyreEngine# binding was a C# binding to Sony's PhyreEngine C++ API. The code got very tedious, so we built a poor man's code generator for it.

To make things worse, the above does not even support overriding C++ classes with C# methods. Doing so would require a whole load of manual code, special cases and callbacks. The code quickly becomes very hard to maintain (as we found out ourselves with PhyreEngine).

This is what drove the motivation to build CXXI.

[1] The conventions that allow Mono to call unmanaged code via its COM interface are simple: a standard vtable layout, the implementation of the AddRef, Release and QueryInterface methods and the use of a well defined set of types that are marshaled between managed code and the COM world.

How CXXI Works

Accessing C++ methods poses several challenges. Here is a summary of the components that play a major role in CXXI:

  • Object Layout: this is the binary layout of the object in memory. This will vary from platform to platform.
  • VTable Layout: this is the binary layout that the C++ compiler will use for a given class based on the base classes and their virtual methods.
  • Mangled names: non-virtual methods do not enter an object's vtable; instead these methods are merely turned into regular C functions. The names of the C functions are computed based on the class name, the method name and the parameter types of the method. These vary from compiler to compiler.

For example, given this C++ class definition, with its corresponding implementation:

class Widget {
public:
	void SetVisible (bool visible);
	virtual void Layout ();
	virtual void Draw ();
};

class Label : public Widget {
public:
	void SetText (const char *text);
	const char *GetText ();
};

The C++ compiler on my system will generate the following mangled names for the SetVisible, Layout, Draw, SetText and GetText methods:

__ZN6Widget10SetVisibleEb
__ZN6Widget6LayoutEv
__ZN6Widget4DrawEv
__ZN5Label7SetTextEPKc
__ZN5Label7GetTextEv

The following C++ code:

	Label *l = new Label ();
	l->SetText ("foo");
	l->Draw ();	

Is roughly compiled into this (rendered as C code):

	Label *l = (Label *) malloc (sizeof (Label));
	ZN5LabelC1Ev (l);   // Mangled name for the Label's constructor
	_ZN5Label7SetTextEPKc (l, "foo");

	// This one calls draw
	(l->vtable [METHOD_PTR_SIZE*2])();

For CXXI to support these scenarios, it needs to know the exact layout of the vtable, so it knows where each method lives, and it needs to know how to access a given method based on its mangled name.

The following chart shows how a native C++ library is exposed to C# or other .NET languages:

Your C++ source code is compiled twice. Once with the native C++ compiler to generate your native library, and once with the CXXI toolchain.

Technically, CXXI only needs the header files for your C++ project, and only the header files for the APIs that you are interested in wrapping. This means that you can create bindings for C++ libraries that you do not have the source code to, as long as you have their header files.

The CXXI toolchain produces a .NET library that you can consume from C# or other .NET languages. This library exposes a C# class that has the following properties:

  • When you instantiate the C# class, it actually instantiates the underlying C++ class.
  • The resulting class can be used as the base class for other C# classes. Any methods flagged as virtual can be overridden from C#.
  • Supports C++ multiple inheritance: The generated C# classes expose a number of cast operators that you can use to access the different C++ base classes.
  • Overridden methods can use the "base" C# keyword to invoke the base class implementation of the given method in C++.
  • You can override any of the virtual methods from classes that support multiple inheritance.
  • A convenience constructor is also provided if you want to instantiate a C# peer for an existing C++ instance that you surfaced through some other means.

This is pure gold.
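To give a feel for what this enables, here is a hypothetical sketch of consuming such a generated binding from C#. It assumes CXXI generated a C# Widget class from the C++ Widget shown earlier; the names are illustrative, not the actual generated API:

class MyWidget : Widget {
	public override void Draw ()
	{
		// Runs whenever native or managed code invokes the virtual Draw
		// method on this instance; "base" dispatches back to the C++
		// implementation.
		System.Console.WriteLine ("Drawing from C#");
		base.Draw ();
	}
}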

The CXXI pipeline in turn is made up of three components, as shown in the diagram on the right.

The GCC-XML compiler is used to parse your source code and extract the vtable layout information. The generated XML information is then processed by the CXXI tooling to generate a set of partial C# classes that contain the bridge code to integrate with C++.

This is then combined with any customization code that you might want to add (for example, you can add some overloads to improve the API, add a ToString() implementation, add some async front-ends or dynamic helper methods).

The result is the managed assembly that interfaces with the native static library.

It is important to note that the resulting assembly (Foo.dll) does not encode the actual in-memory layout of the fields in an object. Instead, the CXXI binder determines based on the ABI being used what the layout rules for the object are. This means that the Foo.dll is compiled only once and could be used across multiple platforms that have different rules for laying out the fields in memory.

Demos

The code on GitHub contains various test cases as well as a couple of examples. One of the samples is a minimal binding to the Qt stack.

Future Work

CXXI is not finished, but it is a strong foundation to drastically improve the interoperability between .NET managed languages and C++.

Currently CXXI achieves all of its work at runtime by using System.Reflection.Emit to generate the bridges on demand. This is useful as it can dynamically detect the ABI used by a C++ compiler.

One of the projects that we are interested in doing is adding support for static compilation; this would allow PS3 and iPhone users to use this technology. It would mean that the resulting library would be tied to the platform on which the CXXI tooling was used.

CXXI currently implements support for the GCC ABI, and has some early support for the MSVC ABI. Support for other compiler ABIs as well as for completing the MSVC ABI is something that we would like help with.

Currently CXXI only supports deleting objects that were instantiated from managed code. Other objects are assumed to be owned by the unmanaged world. Support for the delete operator is something that would be useful.

We also want to better document the pipeline, the runtime APIs and improve the binding.

Posted on 19 Dec 2011 by Miguel de Icaza

2011: Tell me how you used Mono this year

I have written a summary of Mono's progress in the year 2011, but I want to complement my post with stories from the community.

Did you use Mono in an interesting setup during 2011? Please post a comment on this post, or email me the story and tell me a little bit about it.

Posted on 15 Dec 2011 by Miguel de Icaza

Porto Alegre

We are traveling to Porto Alegre in Brazil today and will be staying in Brazil until January 4th.

Ping me by email (miguel at gnome dot org) if you would like to meet in Porto Alegre to talk hacking, Mono, Linux, open source, iPhone or if you want to enlighten me about the role of scrum masters as actors of social change.

Happy holidays!

Posted on 14 Dec 2011 by Miguel de Icaza