Getting familiar with a new code base

Last week, I was looking at the code for a project. I was unfamiliar with the domain, some of the frameworks it utilised, and its architecture. In addition, the project documentation was extremely limited or out of date, mainly due to the relative immaturity of the project and the fast pace of development. As it is an open source project, I don’t have access to the other developers, except through the project forum and mailing list.

What makes it difficult

When unfamiliar with the domain, it is not always easy to find the key classes that contain the domain logic. Some domains use relatively abstract or ambiguous terms for names of objects.

It can be difficult to find out how code is executed – for example when the concrete implementations of interfaces are configured through an IoC container and interception.

How I tackled the problem

  • Reviewing any unit tests
  • Reviewing documentation and scanning forums for items of interest
  • Setting breakpoints in the UI to understand the lifecycle of the application, and identifying key classes from the call stack. This is useful for understanding how information is retrieved and rendered onto the screen.
  • Noting down any key classes for future examination
  • Identifying patterns and frameworks used, for further research
  • Attempting to fix some bugs or small issues to give your probing some concrete direction. Seek feedback via code review, or from a committer if the project is open source.
  • The final stage is when you understand the project architecture and idioms, and are able to contribute new features with confidence that they integrate well with the rest of the application
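For the “noting down key classes” and “identifying patterns” steps, plain text search is often the quickest probe (a generic sketch – the paths and search terms here are purely illustrative):

```shell
# Find where a domain term is defined or used across the source tree
grep -rn "Portfolio" src/ --include="*.cs"

# List the files mentioning a given interface, as a first map of the design
grep -rln "IPricingStrategy" src/ --include="*.cs"
```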
Posted in Uncategorized | Comments Off on Getting familiar with a new code base

Copy spreadsheet as values

Mmmm, a VBA post. Hopefully there won’t be too many of these.

Our system has an Excel interface, and many models are built on top of this interface. The issue with this is that when the spreadsheet opens, it refreshes the data that is currently in the system. Normally, this is exactly what you want, but sometimes it would be nice to create a snapshot of a model at a certain time. Manually, you achieve this with copy and paste-as-values; here are some VBA functions that achieve the same thing programmatically, creating a new workbook automatically.

Option Explicit

' Module-level reference to the snapshot workbook, shared by the routines below
Private NewWorkbook As Workbook

Public Sub ArchiveWorkbook()
    Set NewWorkbook = CreateWorkBook(ThisWorkbook.Worksheets.Count)
    If NewWorkbook Is Nothing Then Exit Sub
    If SaveWorkbook(ThisWorkbook.FullName) Then
        CopyValuesToDestination
    End If
End Sub

' Creates a workbook with wsCount sheets, restoring the user's
' SheetsInNewWorkbook preference afterwards
Function CreateWorkBook(wsCount As Integer) As Workbook
    Dim OriginalWorksheetCount As Long
    Set NewWorkbook = Nothing
    If wsCount < 1 Or wsCount > 255 Then Exit Function
    OriginalWorksheetCount = Application.SheetsInNewWorkbook
    Application.SheetsInNewWorkbook = wsCount
    Set NewWorkbook = Workbooks.Add
    Application.SheetsInNewWorkbook = OriginalWorksheetCount
    Set CreateWorkBook = NewWorkbook
End Function

' Saves the snapshot alongside the original; returns False if the save fails
Function SaveWorkbook(originalFileName As String) As Boolean
    On Error GoTo ErrHandler
    NewWorkbook.SaveAs Filename:=originalFileName & "_staticCopy"
    SaveWorkbook = True
    Exit Function

ErrHandler:
    SaveWorkbook = False
End Function

' Copies each sheet's values, formats and column widths into the snapshot
Sub CopyValuesToDestination()
    Dim source As Range
    Dim dest As Range
    Dim sourceWs As Worksheet
    Dim destWs As Worksheet
    Dim wsCounter As Integer

    For wsCounter = 1 To ThisWorkbook.Worksheets.Count
        Set sourceWs = ThisWorkbook.Worksheets(wsCounter)
        Set source = sourceWs.Range("A1:DA150")

        source.Copy

        Set destWs = NewWorkbook.Worksheets(wsCounter)
        Set dest = destWs.Range("A1:DA150")

        dest.PasteSpecial xlPasteValues
        dest.PasteSpecial xlPasteFormats
        dest.PasteSpecial xlPasteColumnWidths
        destWs.Name = sourceWs.Name
    Next wsCounter

    Application.CutCopyMode = False
End Sub
Posted in Uncategorized | Comments Off on Copy spreadsheet as values

Generate a Patch with Mercurial

This is more of a note to myself so I remember the next time I need to do it, but perhaps somebody will find this useful.

If you are generating a patch file for a Mercurial project, it appears that TortoiseHg doesn’t support this. However, if, like me, you are not wary of the command line, you can easily generate a patch file using hg diff via the command

hg diff -c 1020 -g > c:\temp\mypatch.patch

-c specifies the changeset number, and -g specifies the git extended diff format.
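To apply the resulting patch in another clone, the matching command is hg import (a sketch – the path is the one from the example above):

```shell
hg import c:\temp\mypatch.patch

# Or apply to the working directory without committing:
hg import --no-commit c:\temp\mypatch.patch
```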

Posted in Uncategorized | Comments Off on Generate a Patch with Mercurial

Making the tedious exciting – Automapper

If you take it as given that you do not want your presentation layers to be exposed to Domain Objects directly, then you need some way of “mapping” them to a Presentation Model/DTO structure. Developers hate writing this code: it is boring, fragile and laborious to test.

There is a better way: a mature project called AutoMapper, developed by Jimmy Bogard. I watched a short episode of DotNetRocks TV which had me sold.

AutoMapper generally works via convention. In the simplest case, the names of your properties/methods need to match on the source and destination object. So, if you have a “Name” property on your Domain Object and on your Presentation Model, this will be mapped automatically. All you need to do is configure the types that can be mapped together during application initialisation.
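In code, the two steps look roughly like this (a minimal sketch using AutoMapper’s classic static API; the Customer/CustomerViewModel types are my own illustration, not from the project):

```csharp
using AutoMapper;

// Domain object and presentation model share a "Name" property
public class Customer
{
    public string Name { get; set; }
}

public class CustomerViewModel
{
    public string Name { get; set; }
}

public static class MappingBootstrap
{
    // Called once during application initialisation
    public static void Configure()
    {
        Mapper.CreateMap<Customer, CustomerViewModel>();
    }
}

// Then, anywhere in the presentation layer:
// var viewModel = Mapper.Map<Customer, CustomerViewModel>(customer);
```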

Where this convention over configuration is not sufficient, AutoMapper also supports custom formatting, conversion and resolving, so you can flexibly flatten or project your domain types onto other objects. It was easy to create the Presentation Model I wanted, and the project seems mature enough to cover just about any use case you could have.

A really nice feature is the configuration validation checking, which will point out errors or omissions in the mapping from one object to the other, and which you can call from a unit test:

[Test]
public void TestMappingsAreGood()
{
    Program.AutoMapperConfiguration();
    Mapper.AssertConfigurationIsValid();
}

I have created a small Visual Studio solution which contains the early results of my AutoMapper experimentation, which I hope illustrates the framework’s power and ease of use: AutoMapper Example Project.

For further samples, the AutoMapper source includes a Samples project.

Posted in Uncategorized | Comments Off on Making the tedious exciting – Automapper

How I Learned to Stop Worrying and Love the DVCS

Frankly, I have been a touch cynical about the love shown in the .Net community for Git and Mercurial. Sure, I thought, they might be better than Subversion, but how much better could they be? Surely this was just effort wasted on technical toys, when the time could be better spent writing code that solved real customers’ problems.

I’ve had a change of mind. I like to think I’m open-minded enough to read up on something, at least superficially, before deciding it doesn’t belong near the top of my priority list.
These are my initial impressions of the state of DVCS – errors and omissions are probable at this stage.

I am going to spend a fair amount of time reading about and ultimately using Git. These are the reasons why I think it is superior to centralised source control – you should consider it too:

  1. No need to be continually connected to the source control repository. All the source is stored locally on your machine, so you can commit your changes to your local repository whenever you like, and push them to remote servers when you are connected and ready.
  2. I understand DVCSs handle operations such as moving and renaming files, which other products have often found challenging. The key to this seems to be the fact that the changes between revisions are tracked.
  3. Online collaborative sites such as GitHub – secure hosting for your source, with loads of workflow features.
  4. Ease of branching and merging. I have experience of the excellent merging and branching support in Perforce, and I look forward to seeing how the DVCS tools compare.
  5. Price and performance. Free and fast – what a combination.
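Point 1 is easy to see with Git itself (a generic sketch – the file name and remote are illustrative):

```shell
# A repository is fully local: create it and commit to it with no server involved
mkdir demo && cd demo
git init
git config user.email "you@example.com"
git config user.name "Your Name"
echo "hello" > notes.txt
git add notes.txt
git commit -m "First commit, made entirely offline"

# Later, when connected, publish the accumulated history:
# git remote add origin <url>
# git push origin master
```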

Useful resources

Professional Git Online Book
Joel Spolsky’s Introduction to DVCS and Mercurial
GitHub

Posted in Uncategorized | Tagged , , | Comments Off on How I Learned to Stop Worrying and Love the DVCS

Compiling Objective-C on Windows via Command Line

I’ve been tinkering with Objective-C for a few weeks on my Mac laptop. I wanted to find out if it is possible to compile on Windows for when I don’t have access to a Mac. I just wanted to be able to write basic Console type programs, but have access to the basic Cocoa objects. As this was non-trivial, and some of the software is not particularly mainstream, I am documenting here how I went about doing this.

First, install Cygwin.

Next, install GCC via the Cygwin setup program. GCC supports compiling Objective-C out of the box, but we need an implementation of the Cocoa libraries to give us access to objects such as NSObject, NSArray etc.

We get this by installing the GNUstep libraries. You need both the Core and System packages, currently at version 0.23.0/1.
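For reference, a minimal program.m to exercise the toolchain might look like this (my sketch, using the manual memory management of the era):

```objc
#import <Foundation/Foundation.h>

int main(int argc, const char *argv[])
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // Exercise a couple of Foundation classes provided by GNUstep
    NSArray *words = [NSArray arrayWithObjects:@"Hello", @"from", @"GNUstep", nil];
    NSLog(@"%@", [words componentsJoinedByString:@" "]);

    [pool drain];
    return 0;
}
```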

To compile an Objective-C program into an executable, we run the following two commands:

gcc -c program.m file1.m -I c:/GNUStep/GNUStep/System/Library/Headers -L c:/GNUStep/GNUStep/System/Library/Libraries -fconstant-string-class=NSConstantString -enable-auto-import

gcc -o test.exe program.o file1.o -lobjc -lgnustep-base -enable-auto-import -I c:/GNUStep/GNUStep/System/Library/Headers -L c:/GNUStep/GNUStep/System/Library/Libraries

Hope that helps someone.

Posted in Uncategorized | Tagged | Comments Off on Compiling Objective-C on Windows via Command Line

What to Log?

People keep coming up to me at Christmas parties and asking me what I log in my applications.

And I tell them:

  • All unhandled exceptions, via an application-wide exception handler
  • Performance data for certain critical areas of the app
  • In areas of the application we suspect aren’t being used much
  • At application start-up, so we can see how often the app is being used
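For the first item, wiring an application-wide handler to the logger in a Windows Forms app might look like this (a sketch – MainForm and the log4net setup are illustrative):

```csharp
using System;
using System.Windows.Forms;
using log4net;
using log4net.Config;

static class Program
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(Program));

    [STAThread]
    static void Main()
    {
        XmlConfigurator.Configure();

        // Exceptions thrown on the UI thread
        Application.ThreadException +=
            (s, e) => Log.Fatal("Unhandled UI exception", e.Exception);

        // Exceptions thrown on any other thread
        AppDomain.CurrentDomain.UnhandledException +=
            (s, e) => Log.Fatal("Unhandled exception", e.ExceptionObject as Exception);

        Application.Run(new MainForm());
    }
}
```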

And in development, debug statements can be used to “print” the state of objects – this often saves getting bogged down in the debugger.

You could also add log statements for code paths you suspect are not being used in Production.

Now I can just tell these partygoers to subscribe to my RSS feed, and get back to enjoying another snowball, or a glass of Baileys.

Posted in Uncategorized | Comments Off on What to Log?

Asynchronous Ado.Net Log4Net Appender

As I mentioned in my last post, the out-of-the-box AdoNetAppender in Log4Net is a synchronous appender. This may be fine for many scenarios, but where there is some latency between your application and your database server, you typically want to minimise interaction with the database, and a large amount of logging may cripple the performance of your application.

I thought it might be simple enough to create my own Log4Net appender, and sure enough, it turned out to be straightforward.

I decompiled the source code for the AdoNetAppender via Reflector and set to work; a couple of hours and a few false starts later, I had it working.

The configuration for the async version is practically identical to the AdoNetAppender’s; we just need to reference the type from my assembly instead of from log4net.

I had to modify the code substantially to make it thread-safe: the IDbConnection and IDbCommand objects were fields that were overwritten for each log message, leading to BeginTransaction being called on the same IDbConnection, which throws an exception. Now each log message receives its own connection and command objects.
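The shape of the result is roughly this (a simplified sketch of the approach, not the downloadable source; the parameter mapping is omitted):

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Threading;
using log4net.Appender;
using log4net.Core;

public class AsyncAdoNetAppender : AppenderSkeleton
{
    public string ConnectionString { get; set; }
    public string CommandText { get; set; }

    protected override void Append(LoggingEvent loggingEvent)
    {
        // Hand the event to a worker thread so the logging call never blocks
        ThreadPool.QueueUserWorkItem(state => WriteToDatabase(loggingEvent));
    }

    private void WriteToDatabase(LoggingEvent loggingEvent)
    {
        // Each message gets its own connection and command objects,
        // so no state is shared between threads
        using (IDbConnection connection = new SqlConnection(ConnectionString))
        {
            connection.Open();
            using (IDbCommand command = connection.CreateCommand())
            {
                command.CommandText = CommandText;
                // ...map fields from loggingEvent onto command parameters...
                command.ExecuteNonQuery();
            }
        }
    }
}
```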

Interestingly, it seems to run slower than the synchronous ADO appender against a local SQL Server. I think this is to be expected, due to the context switching of all the threads when run in a tight loop. However, the logging no longer blocks the main thread, so this meets my requirement that logging should not impact the performance of the production code.

Source code is available for download under BSD license as well as Console and Winforms Test apps.

I have not tested this software extensively. I have written it to scratch an itch, not for production purposes, so please test extensively before relying on it.

Any feedback or bug reports welcome.

Posted in log4net, logging | Comments Off on Asynchronous Ado.Net Log4Net Appender

Centralised Application logging via Log4Net

Logging is now an essential part of any serious application. It is crucial to get unhandled exceptions from production and development environments logged. This removes the need for users to relay error messages, and gives developers access to full stack traces, which often reveal the source of a defect immediately.

Log4Net offers many different ways of logging. There are many different appenders, and with the right combination it is possible to get centralised logging from different clients and applications.

Our requirements are that logging messages should have guaranteed delivery and not impact application performance. We need to receive messages on a single machine, and be able to view them effectively.

Therefore, we cannot use the UdpAppender, as these messages would not be guaranteed to arrive. The AdoNetAppender works synchronously, and we found it had a material impact on our app’s performance.

The solution we settled on is a bit more convoluted than I would have liked, but has been running for months now, and works really well.

We use the RemoteSyslogAppender to forward messages to a server running Kiwi Syslog Daemon, which is a fairly inexpensive product.

The configuration for this appender is reasonably simple:

<appender name="RemoteSyslogAppender" type="log4net.Appender.RemoteSyslogAppender">
  <layout type="log4net.Layout.PatternLayout" value="%date{dd/MM/yyyy hh:mm:ss,fff} | %thread | %level | %logger | %username | %P{log4net:HostName} | dev | %message | %exception | "/>
  <remoteAddress value="LOGSERVER" />
  <filter type="log4net.Filter.LevelRangeFilter">
    <levelMin value="ALL" />
  </filter>
</appender>

We then configure Kiwi to forward its messages to a database.

Finally, we use Log4Net Dashboard, another inexpensive product, to view the messages that have been received. Kiwi is configured to match the schema Log4Net Dashboard expects.

Our NAnt build scripts deal with the different configurations for development, QA, and production logging.

I am sure there are alternative ways to achieve the same goal, but this was a low-cost and easy way to achieve robust logging, and it gives us confidence that we can monitor the health of our applications.

Posted in log4net, logging | 1 Comment

Everything I know about building a responsive Windows Forms application

The more I read about multi-threading, the less I feel I understand. Perhaps a good rule is “don’t write multi-threaded code”, similar to the First Law of Distributed Objects. Fortunately, all I am going to talk about in this article is my experience of making a responsive Windows Forms UI, using the BackgroundWorker and other techniques.

Full documentation on the BackgroundWorker can be found here, but the basic usage is to sign the long-running task up to the DoWork event handler, and to call the RunWorkerAsync method to kick the task off.

A naive example:

private void btn_Click(object sender, EventArgs e)
{
    BackgroundWorker backgroundWorker = new BackgroundWorker();
    backgroundWorker.DoWork += delegate
    {
        for (int i = 0; i < 10; i++)
        {
            label1.Text = DateTime.Now.ToString();
            Thread.Sleep(1000);
        }
    };
    backgroundWorker.RunWorkerAsync();
}

This will not work – we will get an InvalidOperationException at runtime: "Cross-thread operation not valid: Control 'label1' accessed from a thread other than the thread it was created on." The issue is that writing to the control label1 from a worker thread is not allowed, due to the "legacy" threading rules Windows leaves us with. Incidentally, we get the same issue in WPF.

We need to make sure that we do not call back to the UI thread, except on the event handlers that Microsoft has kindly made for us, in this case ProgressChanged and RunWorkerCompleted.

A non-crashing example:

private void btn_Click(object sender, EventArgs e)
{
    BackgroundWorker backgroundWorker = new BackgroundWorker();
    backgroundWorker.DoWork += delegate
    {
        for (int i = 0; i < 10; i++)
        {
            backgroundWorker.ReportProgress(i, DateTime.Now.ToString());
            Thread.Sleep(1000);
        }
    };
    backgroundWorker.WorkerReportsProgress = true;
    backgroundWorker.ProgressChanged += delegate(object s, ProgressChangedEventArgs args)
    {
        label1.Text = args.UserState.ToString();
    };

    backgroundWorker.RunWorkerCompleted += delegate
    {
        label1.Text = "Completed";
    };
    backgroundWorker.RunWorkerAsync();
}

If you can't guarantee that the code that invokes a BackgroundWorker is running on the UI thread, you will need to check with InvokeRequired, a property on the base class Control, and if necessary call the Invoke method, which runs your UI update code on the main UI thread. Invoke is a fairly expensive operation, so I don't recommend calling it regardless of whether it is needed. An example of this:

backgroundWorker.ProgressChanged +=
delegate(object s, ProgressChangedEventArgs args)
{
     if(InvokeRequired)
     {
         Invoke(new Action<object>(UpdateLabel), args.UserState);
     }
     else
     {
         UpdateLabel(args.UserState);
     }
};

Using this handful of techniques, you should be able to build rock-solid, responsive user interfaces – until you need to share data between threads, that is, but that is a different issue altogether. Are there any other techniques you use for building responsive Windows Forms applications?

Posted in UI, Windows Forms | Comments Off on Everything I know about building a responsive Windows Forms application