Mammock: a new mocking framework

by Bjørn Bouet Smith 16. February 2012 19:41

I have decided to fork Rhino Mocks and continue development of it under another name: Mammock.

Don't ask me how I came up with the name; it's kind of silly. It's a combination of Mammoth and Mock, but it turns out that "mammock" actually means a fragment of something else. You could say that a mock is a part of something else, namely the piece of code you want to test, so Mammock is not such a bad name after all :)

The plans for Mammock are:

  • Upgrade to .NET 4
  • Upgrade to Castle 3
  • Get rid of old-style mocking and steer the codebase towards pure AAA
  • Make the codebase smaller by providing a much smaller and leaner interface
  • Go through the codebase, optimise, and fix bugs that have been filed against Rhino Mocks.

 

I have already updated the code to .NET 4 and Castle 3, but it remains a goal to go through the reflection and other code to see what can be done more efficiently using .NET 4.

I am thinking about giving Mammock a version number that is .1 higher than the .NET Framework version it is targeting, so Mammock for .NET 4 would be Mammock 4.1, and so forth. But that is just an idea.

I plan on making Mammock compatible with Rhino Mocks in the sense that if you are using the following style:

MockRepository.GenerateStub<T>, MockRepository.GenerateMock<T>, etc., i.e. the static methods,

then you should be able to switch to Mammock without changing anything but the namespace.
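For illustration, it would look something like this (IFileSystem is just a made-up example interface, and I am assuming the new namespace will simply be Mammock):

// Before, with Rhino Mocks:
// using Rhino.Mocks;

// After, with Mammock (the namespace is an assumption on my part):
using Mammock;

IFileSystem stub = MockRepository.GenerateStub<IFileSystem>();
IFileSystem mock = MockRepository.GenerateMock<IFileSystem>();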

You can find the codebase on GitHub.


.NET | Rhino Mocks | Mammock

Configuration file magic via Smith.BuildExtensions

by Bjørn Bouet Smith 3. February 2012 21:00

I am sure everyone has had the "pleasure" of maintaining configuration files across projects and even solutions, copying and pasting configuration data between them to keep them in sync, and has run into the same issues as everyone else: missing variables in one project, missing sections, and so forth.

So have I, and at my previous job we used NAnt and a custom-built script to transform our app.config and web.config into the correct version for the given target we were building.

At my new job we are having the exact same problem, surprise :) - and instead of "polluting" our codebase with NAnt (we are running TFS, so NAnt does not fit well there), I decided to build my own MSBuild task that could do basically the same thing: transform a configuration template, exchanging "tokens" or variables with configured elements or values from one or many configuration files.

I have done that now, and you can see it all in its simple splendor at CodePlex.

Basically, you add a little markup to the project files of the projects where you want configuration sharing and transformation, create templates for your app.config and web.config plus a few files for the variables, and the next time you build you get configuration files that match the Build Target you selected in Visual Studio - with warnings and errors in the Error List if you have missing configuration variables for a given build target.

The following information is copied from CodePlex, where you can see more elaborate examples.

Getting started with the Smith Build extensions is really easy: simply download the code, build it, and copy Smith.BuildExtensions.dll to a directory of your choosing.


Then either create or copy the provided examples of config files and put those in another directory of your choosing.

Then you need to change each project file that you want transformations for.

Add the following line to each of those project files:

<UsingTask TaskName="ConfigTransformTask" 
           AssemblyFile="Smith.BuildExtensions.dll" />


But remember to change the AssemblyFile attribute to point to where you put the compiled Smith.BuildExtensions.dll file.


Uncomment the <Target Name="BeforeBuild"> target and add the following to it:

<Target Name="BeforeBuild">
   <ConfigTransformTask ConfigBaseDir="..\Configs" 
                        ConfigTemplate="App.config.base.config" 
                        Configuration="$(Configuration)" 
                        Outputfile=".\App.config" />
</Target>


ConfigBaseDir is the directory where you have placed the app.config and web.config templates and the build-specific settings files.

ConfigTemplate is the name of the template to use for the transformation: if you are doing this in a web project, choose your web.config.base.config file, and the app.config.base.config file if it's a normal project or test project.

The Outputfile attribute controls what filename to write the transformed file to, i.e. again, Web.config for a web project and App.config for other projects.
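So for a web project, the complete target would look like this; only the template and output file names change:

<Target Name="BeforeBuild">
   <ConfigTransformTask ConfigBaseDir="..\Configs" 
                        ConfigTemplate="Web.config.base.config" 
                        Configuration="$(Configuration)" 
                        Outputfile=".\Web.config" />
</Target>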

To see a full project file example, head over to the Project file example page

To see how to create the xml configuration files, head over to the Xml examples page.

I hope whoever reads this will find it just as exciting as I do, and will be a happy user of it :)


.NET | c# | Configuration

Updates to the memcached client

by Bjørn Bouet Smith 14. September 2011 13:26

New updates are available for my memcached client.

I decided to make this update a proper release on CodePlex, since the client now contains the features it really needs.

Features

  • Server monitor that watches the memcached server nodes, removes them from the cluster if they are dead, and re-adds them as soon as they become available again.
  • MultiGet implemented, so now you can ask for more than one key at a time. The only caveat is that the values have to be of the same type.
  • Gets has been implemented, so you can get the CAS value to be used for check-and-set operations (see the sketch below).
  • Set operation has been implemented, so you can unconditionally overwrite values in the memcached server.
  • Performance counters have been implemented, so you can see how busy your server is with memcached operations and how long they take.
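To give an idea of how Gets and check-and-set fit together, here is a rough usage sketch; the type and method names below are illustrative, not taken from the client's actual API:

// Hypothetical usage sketch - names are illustrative, not the client's real API.
// Gets returns the value together with its CAS token.
CasResult<long> counter = client.Gets<long>("visits");

// The write only succeeds if nobody changed the value in the meantime.
bool stored = client.CheckAndSet("visits", counter.Value + 1, counter.CasValue);
if (!stored)
{
    // Somebody else changed the value first - re-read and retry.
}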

You can download the new release at: http://asyncmemcached.codeplex.com/releases/view/73320

 

 


.NET | c# | memcached

restful.net

by Bjørn Bouet Smith 24. April 2011 14:55

Having tried out several web development frameworks and service frameworks while building RESTful services, I found that none of them were really suited for the job.

So I decided to build a very simple framework that is intended for making REST services and nothing else. It's not an RPC framework; it's meant to be used for REST.

Let me give a very brief overview of why I think the already established frameworks are not good enough.

MVC is simply too weird for my taste. First of all, it uses more or less "automagic" mapping of the methods in a controller to the verbs being used. I do not like that; I like to be in absolute control. Secondly, you have to return an ActionResult instance from your methods, which is wrong in my opinion and hides the real intent of the methods, i.e. it makes much more sense to return the objects that your method found. I think MVC is meant more for building websites than for web services, let alone REST services.

MVC's async implementation is laughable. Seriously, who thought up the silly way you have to implement async operations? Why not simply go with the standard BeginXX/EndXX methodology instead of making something really weird? I guess it's because real async is kind of hard to wrap your head around.

I have also tried out both WCF and WCF HTTP, the next-generation version of WCF that is tailored to building web services over HTTP.

WCF and WCF HTTP are pretty good. First of all, it's a service framework, built with services in mind. It's very extensible, although it can be hard to find the exact place to extend if you want to change a particular behaviour. WCF supports asynchronous operations out of the box. And you do not have to return a weird result object, but can return whatever you please, an object or void.

The only real reasons why WCF did not cut it with me were two simple ones. First, you cannot build hierarchical REST services with WCF, i.e. you cannot have /addressbook/{addressbookid} served by one class and then have /addressbook/{addressbookid}/contacts served by another class. All access to the same root must be served by the same service, which requires you to have _ALL_ your methods in one service, which is bad. The other reason is that it is not very easy to exchange the serializer in WCF; in fact it is so hard that I do not think the people who made the framework ever wanted anyone to exchange the serializers.

WCF HTTP comes with a nice feature where it looks at the Accept header of the request and serves the correct content type, but if you start tweaking with your own serializers - let's say you do not like the JsonDataContractSerializer, like so many people, and inject your own - then you lose that functionality and have to build it yourself as well.

I also briefly looked at the OpenRasta framework, which looks awesome and supports everything you would ever need, except asynchronous services, so you lose some scalability if you use it.

All that being said, I decided to build my own simple framework that tries to do everything I needed, and it is actually very simple to use.

It still lacks a few features - nothing you cannot build into your service implementation yourself, but things that will come in time.

I have called my framework restful.net and you can find it at restful.net.

Restful.net supports the following features so far:

 

  • Automatic content type detection and serving of the requested content type
  • Supports asynchronous and synchronous APIs
  • Non-intrusive: you can use any class as a REST service (see the sketch below)
  • Simple configuration: just add one http handler, configure the routes, and you are good to go
  • You can return object instances from your services and the framework will handle serialization
  • Built-in support for ETag / If-None-Match for proxy/browser caching
  • Plugs into an IOC container easily, so you can extend your REST services as you like
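To illustrate the non-intrusive part, a service can be an ordinary class along these lines; the class and method names are made up for this example, not taken from the framework:

// Illustrative only - a plain class, no required base class or attributes.
public class AddressBook
{
    public int Id { get; set; }
}

public class AddressBookService
{
    // Return a plain object instance; the framework handles serialization
    // to the content type the client requested.
    public AddressBook GetAddressBook(int addressBookId)
    {
        return new AddressBook { Id = addressBookId };
    }
}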

 

Features missing so far:

 

  • Native authentication support
  • Logging support

 

The missing features are something you can easily build into the REST service yourself, by using interceptors or even just checking the auth headers in your methods, but they are things that should be part of the framework, so that kind of boilerplate code does not clutter your business logic.

To show how easy it is to build a REST service with the framework, I have implemented a test REST service that is part of the code on CodePlex.

Try it out and let me hear what you think :)


.NET | asp.net | c# | REST

Thread safe version of Rhino Mocks

by Bjørn Bouet Smith 26. March 2011 00:25

If you are a happy user of Rhino Mocks like I am, you might have experienced the same issue that I noticed this week at work.

We recently ported our entire code base to TFS, and decided to change the unit testing framework to MSTest so we could get a better integration with TFS.

We were a bit unhappy with the build times, so I started looking into running tests in parallel.

Luckily it's very easy with MSTest: you simply add a parameter to your Local.testsettings file and you can run up to 5 tests in parallel, which is very nice, since most of us do in fact have multi-core processors.

But when we changed the setting, we started seeing random test failures: in one test run two tests would fail, in another they were fine, and every single time you ran a test alone it worked perfectly.

So we figured out it was caused by running multiple tests in the same test fixture in parallel, in conjunction with Rhino Mocks.

At first we thought it was MSTest that somehow was sharing state between tests in the same test fixture, but after a few tests we found out that it was in fact Rhino Mocks that was not built in a thread-safe way.

There were two issues. The first was a race condition that happens when you create a stub or mock.

Internally, Rhino keeps a dictionary mapping a Type to a proxy generator, so multiple calls to create the same type get a speed benefit. Unfortunately the code was not written in a thread-safe way:

if (!generatorMap.ContainsKey(type))
{
    generatorMap[type] = new ProxyGenerator();
}

return generatorMap[type];

So what happens when two threads access this piece of code is that there is a high chance that they both will enter the body of the if statement and try to add the same key to the dictionary and thus fail. This is what we were seeing.

The generatorMap variable is a static variable in the class MockRepository, and as such that's fine, but since multiple threads can access it, there needs to be a guard in place to prevent two threads from trying to add the same key to the dictionary.
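For reference, the conventional guard would be to wrap the lookup and the add in a lock; a minimal sketch, where the lock object and helper method are my own names:

private static readonly object generatorLock = new object();

private static ProxyGenerator GetProxyGenerator(Type type)
{
    // Serialize access so only one thread can check and add at a time.
    lock (generatorLock)
    {
        ProxyGenerator generator;
        if (!generatorMap.TryGetValue(type, out generator))
        {
            generator = new ProxyGenerator();
            generatorMap[type] = generator;
        }

        return generator;
    }
}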

I created a small patch for this, not by adding a lock statement, but by simply adding a [ThreadStatic] attribute to the generatorMap variable. 

[ThreadStatic]
private static IDictionary<Type, ProxyGenerator> generatorMap;

For those who do not know what thread static means, let me explain it briefly. ThreadStatic means that each thread gets its own instance of the variable, which removes the race condition, since no two threads ever touch the same instance. The variable is still static, so several calls to MockRepository on the same thread will still benefit from the map, but multiple threads will not muck things up for each other.

One caveat with ThreadStatic is that each thread has to initialize the member variable itself; otherwise only the first thread gets an instance that is not null. So I added a call in the constructor to instantiate the variable if it was null.
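That per-thread initialization could look something like this - a sketch of the idea, not the exact patch:

public MockRepository()
{
    // A [ThreadStatic] field only gets its initializer run on one thread;
    // every other thread sees null until it creates its own instance.
    if (generatorMap == null)
    {
        generatorMap = new Dictionary<Type, ProxyGenerator>();
    }

    // ... rest of the constructor
}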

Okay, one problem solved.

Another came up. 

Apparently there was another static variable causing issues: MockRepository holds a reference to the current repository, which is bad, since it prevents multithreading from working altogether - multiple threads will change the behaviour of each other's mocks and cause tests to fail in the weirdest ways. Luckily that was an easy fix: I just added ThreadStatic to the variable, and presto, everything started working as expected.

[ThreadStatic]
internal static MockRepository lastRepository;

Luckily for you, you do not have to do what I just did.

I downloaded the code from the source repository on GitHub and fixed the code.

I have put it up for download here. 

Just build the code using the instructions in the "How to build.txt" file, and you will get a nice Thread Safe Rhino Mocks dll.

ayende-rhino-mocks-0f0f055.zip (9.93 mb)

Some might ask why I did not submit a patch to the author himself; I did not do that because it seems like he has abandoned the project entirely. No code has been checked into the project in over a year.


.NET

Efficient buffering with BufferManager

by Bjørn Bouet Smith 22. January 2011 00:42

When tasked with writing code that does I/O to read data into an application for further processing, it is normal to create a buffer that holds chunks of data while they are being transferred from the client, the disk, or whatever medium the data is coming from.

It is not uncommon to find code similar to the example below.

byte[] buffer = new byte[requestSize];
stream.BeginRead(buffer, 0, requestSize, OnReadComplete, null);

 

While the code above is okay if your application is not very busy, it might be an issue if you have to process a large number of requests at the same time or in rapid succession.

The reason is that the code above allocates a new buffer to hold the data each time. Objects larger than 85k are allocated on the large object heap, and if you allocate a lot of different-sized objects, the large object heap becomes fragmented, which can lead to out-of-memory exceptions.

There are a couple of solutions to prevent this issue.

One is to do your own "memory" management: preallocate, say, 10 large byte arrays, reference those from where you need them, and simply reuse them as needed. This prevents a lot of arrays from being created and prevents the fragmentation, since those 10 arrays stay in the same position on the large object heap.
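A minimal sketch of that idea, assuming a single thread; a real implementation would need locking around Take and Return:

// Fixed-size pool: the arrays are allocated once and keep their position
// on the large object heap, so reusing them causes no further fragmentation.
public class SimpleBufferPool
{
    private readonly Stack<byte[]> buffers = new Stack<byte[]>();

    public SimpleBufferPool(int bufferCount, int bufferSize)
    {
        for (int i = 0; i < bufferCount; i++)
        {
            buffers.Push(new byte[bufferSize]);
        }
    }

    public byte[] Take()
    {
        return buffers.Pop();
    }

    public void Return(byte[] buffer)
    {
        buffers.Push(buffer);
    }
}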

An easier solution is to use the BufferManager class that was introduced with WCF.

The BufferManager class handles the pre-allocation of chunks of memory; your application simply requests a chunk of memory and returns it when it's done with it.

Rather simple:

// BufferManager lives in the System.ServiceModel.Channels namespace.
// Create a buffer manager with a total pool size of 1MB and a max buffer size of 100k
BufferManager bufferManager = BufferManager.CreateBufferManager(1000000, 100000);

// Request a buffer
byte[] buffer = bufferManager.TakeBuffer(100000);

// Work with the buffer
stream.BeginRead(buffer, 0, buffer.Length, OnReadComplete, null);

// When you are done with the buffer (for an async read like the one above,
// that means once the read has completed), hand it back to the pool
bufferManager.ReturnBuffer(buffer);

 

Not only does the buffer manager help mitigate the memory fragmentation problem, it is also much faster to get a preallocated buffer than to allocate a new one each time you need it.

I created a very simple and not very realistic test to show the difference. The first example allocates the buffers as needed.

Stopwatch watch = new Stopwatch();
watch.Start();

for (int x = 0; x < 1000000; x++)
{
    byte[] buffer = new byte[100000];
    for (int y = 0; y < 1000; y++)
    {
        buffer[y] = (byte)(y % 4);
    }
}

Console.WriteLine(watch.ElapsedMilliseconds);

On my computer this takes 7541 milliseconds on average to run.

The next example uses the buffer manager but is doing the exact same "work".

Stopwatch watch = new Stopwatch();
watch.Start();
BufferManager bufferManager = BufferManager.CreateBufferManager(100000, 100000);
for (int x = 0; x < 1000000; x++)
{
    byte[] buffer = bufferManager.TakeBuffer(100000);
    for (int y = 0; y < 1000; y++)
    {
        buffer[y] = (byte)(y % 4);
    }
    bufferManager.ReturnBuffer(buffer);
}

Console.WriteLine(watch.ElapsedMilliseconds);

This example only takes 1390 milliseconds on average to run; that's more than 5 times as fast, just to allocate the memory.

In real-world programs you would not just be allocating memory and doing nothing with it, so the relative improvement from switching to the BufferManager will not be as great, as the total time spent allocating memory is probably quite low - unless you have a lot of garbage collection going on because many objects are being created and destroyed.

But taking both benefits into consideration, I think it's definitely worth using instead of manually allocating buffers to hold your temporary data.


.NET | c# | memory

Updates to the asynchronous memcached client

by Bjørn Bouet Smith 21. September 2010 01:18

New updates are available for my memcached client.

 

  • Server monitoring is in place, i.e. it detects when a server node goes down or several requests fail for a given node.
  • Logging framework has been added, so useful log statements can be added.

Coming updates are:

  • Actually using the information added by the server monitor, to remove a node when it is marked as dead and reintroduce it again, if and when it is marked as alive again.
  • Implement Set - I don't know how I could forget this in the first version, but it's very simple to implement with the current implementation.
  • Implement MultiGet - so you can save a few precious roundtrips if you are lucky enough that all your keys end up on the same server node.
  • Implement stats operation - so you can get some useful statistics back from the server.

 

Anyway, check it out at: http://asyncmemcached.codeplex.com

If anyone out there is actually using the client or considering it, please let me know, I would really like some feedback.


.NET | c# | memcached

Reading structured files into SQL Server Part 2

by Bjørn Bouet Smith 14. September 2010 20:05

My last post presented how you can read a file in a structured format into memory for further processing.

This post will focus on how you easily can transport the contents you just imported into SQL server.

If you want to move data in bulk into SQL Server, the most efficient way of doing that is to use the class System.Data.SqlClient.SqlBulkCopy.

There are two ways you can use SqlBulkCopy: either you give it a DataTable instance with the data represented in the same format and order as the table in the database, or you give it an IDataReader instance that provides access to the data in the same format as the DataTable would.

Both methods work just fine, but if you want high performance and efficiency you should not use a DataTable, since it requires you to build up a DataTable object and transform your data into its row format, which is inefficient. The most efficient way is to implement an IDataReader on top of the data you want to import. Naturally, if you had to implement the IDataReader yourself, the DataTable approach would probably be quicker to get working, since it is very easy to understand and most people have used a DataTable before. But say you want to insert 1 billion rows: a DataTable simply cannot hold 1 billion rows, so you would have to create several DataTable instances with chunks of data, which would use up a lot of memory anyway and, furthermore, create a lot of objects that would have to be collected by the garbage collector.

By using an IDataReader you only have to provide one row at a time to the SqlBulkCopy class, and you can easily reuse your internal row representation for each row. This makes it very efficient, both in performance, since you create fewer objects, and in memory, since less data is held at the same time. Furthermore, creating fewer objects causes less garbage collection, which is good, since the entire application grinds to a halt each time the garbage collector kicks in.

Now, fewer words and more code. I have created a few classes that make up the IDataReader implementation:

 

  • FileDataColumn - A class used to describe a single column of the records you load through the IDataReader.
  • FileDataRecord - An IDataRecord implementation with the possibility to also set the values of the record, not only read them.
  • FileDataReader - An IDataReader implementation that uses the FileRecordReader from my last post to provide forward-only access to each record as an IDataRecord.

 

 

The FileDataColumn class only contains two properties, ColumnName and ColumnType; what they are used for is kind of obvious, so I will not go into any detail on that class beyond the sketch below.
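For completeness, the class is essentially just this:

public class FileDataColumn
{
    // The name of the column, usable with the reader's string indexer.
    public string ColumnName { get; set; }

    // The CLR type the field's text is converted to.
    public Type ColumnType { get; set; }
}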

The FileDataReader takes a few arguments in its constructor that will enable it to read the data and provide a nice interface to it.

 

/// <summary>
/// Initializes a new instance of the <see cref="FileDataReader"/> class.
/// </summary>
/// <param name="fileStream">The file stream.</param>
/// <param name="columns">The columns describing the format of the stream for a single record.</param>
/// <param name="recordSeparator">The record separator.</param>
/// <param name="fieldSeparator">The field separator.</param>
/// <param name="fileEncoding">The file encoding.</param>
/// <param name="recordManipulator">The record manipulator.</param>
public FileDataReader(Stream fileStream, 
                      FileDataColumn[] columns, 
                      char recordSeparator, 
                      char fieldSeparator, 
                      Encoding fileEncoding,
                      Action<FileDataRecord> recordManipulator)

 

The first argument is the stream where the data is located. In real-world scenarios this would be a FileStream variant pointing to the file you want to read; this stream is passed on to the FileRecordReader instance that the constructor creates.

The second argument is an array of FileDataColumn objects that describes the record format of the file. They must be in the same order as the fields in the file.

The third argument is the record separator character, i.e. the character that separates the records from each other in the file.

The fourth argument is the field separator character, i.e. the character that separates the fields within a record.

The fifth argument is the encoding of the file, which is important in particular if you want to read text.

The last argument is an action that will be called before each call to Read returns, which gives you an opportunity to modify the data before it is passed on to whatever reads from the reader.

You use the FileDataReader as you would use any other IDataReader: by invoking the Read() method, which returns a bool indicating whether the reader was positioned at the next record.

i.e. 

 

IDataReader dataReader = new FileDataReader(s, cols, '\n', ',', Encoding.Unicode, null); // null: no record manipulator

while (dataReader.Read())
{
    string fieldValue = (string)dataReader["field"];
    int fieldValue2 = (int)dataReader[2];
}

And so forth. The beauty of it is that if you do not want to do any processing, you can just give SqlBulkCopy the instance of the FileDataReader and you don't have to do any more work whatsoever.

If you need to manipulate each record, you simply provide an Action to the FileDataReader i.e.

Stream s = new MemoryStream(1000);
for (int x = 0; x < 10; x++)
{
    AddRecordToStream(s, string.Format("{0}\n", (x * 10)));
}
s.Position = 0;
FileDataColumn[] cols = new[] 
{ 
    new FileDataColumn { ColumnName = "First", ColumnType = typeof(int) } 
};

IDataReader dataReader = new FileDataReader(
    s,
    cols,
    '\n',
    ';',
    Encoding.Unicode,
    record =>
    {
        int currentValue = record.GetInt32(0);
        record.SetValue(0, currentValue * 2);
    });

for (int x = 0; x < 10; x++)
{
    dataReader.Read();
    Assert.That(dataReader[0], Is.EqualTo(x * 10 * 2), x.ToString());
}

Nice and easy if you ask me :) - naturally, you could easily extend and improve my FileDataReader implementation, but this should give you a hint of how you can efficiently read a file into SQL Server if you need to.

To use this reader together with SqlBulkCopy you simply create an instance of the FileDataReader and use it like below:

 

using (SqlBulkCopy bulkCopy =
                new SqlBulkCopy(destinationConnection))
{
    bulkCopy.DestinationTableName =
        "dbo.DestinationTable";

    try
    {
        bulkCopy.WriteToServer(reader);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
    finally
    {
        reader.Close();
    }
}

 

I have attached the entire source code project for both this post and the previous one, including integration tests that will show how to use the code.

I hope you enjoy using it, I certainly enjoyed writing the code.

Any questions, post a comment or leave feedback.

FileDataReader.zip (14.44 kb)



.NET | c# | SQL Server

Asynchronous memcached client

by Bjørn Bouet Smith 1. June 2010 09:25

I have been working with distributed caching for about 4 years now, using memcached as the only server.

I have been trying out different memcached clients, and some have been good, others bad.

They have all had the same problem: they were implemented synchronously, i.e. they waste a lot of threads on simple waits.

I have started a project to create a fully asynchronous memcached client in .NET.

Check out:

http://asyncmemcached.codeplex.com/

It's not production code yet, but it's a fully working client for gets/sets. It just needs some additional features, and then I will release a version.


.NET | c# | memcached

Error 0x80005000 when using Directory Services in .NET

by Bjørn Bouet Smith 8. January 2009 13:41

I am currently developing a deployment tool to help me do easy deployments of websites to many web servers at the same time.

To do this I am using a combination of WMI and Directory Services in .NET.

When I tried out the tool on our production environment I got some COM exceptions.

Naturally I started looking at my code, trying different approaches, but to no avail.

I later found out that to use Directory Services together with IIS, you need to have IIS installed on the machine you are running the code from.

Even if you do not manipulate the local machine.

So e.g. a directory path such as IIS://machinename/W3SVC will not work unless you install IIS on the local machine from where you run the code.

If IIS is not installed you will get an error like:

 System.Exception: System.Runtime.InteropServices.COMException (0x80005000): Unknown error (0x80005000)
   at System.DirectoryServices.DirectoryEntry.Bind(Boolean throwIfFail)
   at System.DirectoryServices.DirectoryEntry.Bind()


The code I was using was:

string iisMetaBasePath = "IIS://Server/W3SVC";
using (DirectoryEntry dir = new DirectoryEntry(iisMetaBasePath))
{
    foreach (DirectoryEntry de in dir.Children)
    {
        // work with each child entry, e.g. read its name
        Console.WriteLine(de.Name);
    }
}
 

 


.NET | c# | Directory Services

About me

Even though I have been working with programming for 15 years now, I still get amazed at how little I know :)

That is one of the great things about computers: there is always someone better than you, someone you can ask for help.

Follow me on Twitter
