Orcs Goblins and .NET

I enjoy reading and writing. I hope you enjoy at least the former. I have moved my blog to Brendan.Enrick.com.

Moving My Blog

Attention, readers: my blog is moving. You can now find my posts at http://brendan.enrick.com/

Please visit Brendan Enrick's Blog at its new location. The main change you as a reader will see is a new look for the blog. I plan on using more screenshots, since the new blog makes it easier for me to post images. Please keep reading; I will continue writing.

by Brendan | 0 Comments

My Planned Azure Stuff

I've got some code to write for the company I work for, which we will be putting on the cloud. Since I started working with Microsoft's cloud solution a couple of months ago, I've noticed one problem that is probably very common with early releases of technology: the samples they provide just don't cut it. Some of them are pretty good, and I've referenced them when trying to work with things, but it really takes the community's involvement to get a good collection of articles and samples together.

Now that they've released some of this stuff, I want to try to get some simple little examples together to get people started with Windows Azure. When new stuff comes out I usually like to start with some short little programs which use the new technology.

Hello world programs seem to spring up everywhere. Those are nice, but I like to exercise something special about the new technology. For example, with WPF and Silverlight you can obviously start with a Hello World program, but you might then quickly move on to one with a spinning object or some kind of crazy animation.

With Windows Azure I plan to start with a simple example: a program that uses multiple instances of web (front-end) and worker roles working together reasonably well. This should help show how Windows Azure can scale applications, and some considerations must be made when working with the cloud. I've started one of my samples, but I haven't gotten the chance to finish it yet. When I do, I will make sure to post about it here. To give an idea of the shape of things, a minimal worker role skeleton is sketched below.
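
Here is a bare-bones worker role sketch, as I understand the current CTP SDK (the namespace and overrides come from the SDK samples; treat the details as approximate):

using System.Threading;
using Microsoft.ServiceHosting.ServiceRuntime;

// A minimal worker role skeleton; a real one would pull messages
// from a queue inside the loop instead of just sleeping.
public class WorkerRole : RoleEntryPoint
{
    public override void Start()
    {
        while (true)
        {
            // Grab a work item from the queue and process it here.
            Thread.Sleep(1000);
        }
    }

    public override RoleStatus GetHealthStatus()
    {
        return RoleStatus.Healthy;
    }
}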

by Brendan | 0 Comments

Default Azure Storage Information

The cloud is nice and useful, but in my opinion there is one feature which is vitally important to most applications: storage. Pretty much every Azure application will need to use it. One reason is that a little bit of process management needs to be done, and I've noticed this requires a combination of the Queues and the Tables.

Queues are great because roles are able to queue up work which needs to be done. This helps make sure that all required tasks get completed; without queues, communication between the processes could get a bit messy. The information being worked with also needs to live in a shared location which all of the roles and their instances can access. The storage makes this work.

For these apps you don't use the standard configuration files to store your connection settings. There is a file in the cloud project called ServiceConfiguration.cscfg, and this file is where the ConfigurationSettings section is located. This is where you define the settings for connecting to your storage when it is in the cloud. You will also have your configuration information stored in your web.config, which makes things nice and easy, since the methods that read your configuration know to check the correct location.

For now these connections are made using HTTP, so these "endpoints" are simply URLs. For local work the dev fabric and the storage run locally, so everyone has the same default settings for their storage. A lot of people are probably going to be looking for these default settings, so to make them easy to find, here they are.

These are the appSettings entries in the normal configuration file (web.config).

<appSettings>
  <add key="AccountName" value="devstoreaccount1"/>
  <add key="AccountKey" value="Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="/>
  <add key="BlobStorageEndpoint" value="http://localhost:10000"/>
  <add key="QueueStorageEndpoint" value="http://localhost:10001"/>
  <add key="TableStorageEndpoint" value="http://localhost:10002"/>
</appSettings>

These are the ones in the ServiceConfiguration.cscfg file.

<ConfigurationSettings>
  <Setting name="BlobStorageEndpoint" value="http://127.0.0.1:10000/" />
  <Setting name="QueueStorageEndpoint" value="http://127.0.0.1:10001/" />
  <Setting name="TableStorageEndpoint" value="http://127.0.0.1:10002/" />
  <Setting name="AccountName" value="devstoreaccount1"/>
  <Setting name="AccountSharedKey" value="Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="/>
</ConfigurationSettings>

If you need access in both a WebRole and a WorkerRole, this config section can be added to each one.

Also, because you have added these settings to that configuration file, you will need to add their definitions to the ServiceDefinition.csdef file in the following way.

<ConfigurationSettings>
  <Setting name="BlobStorageEndpoint"/>
  <Setting name="QueueStorageEndpoint"/>
  <Setting name="TableStorageEndpoint"/>
  <Setting name="AccountName"/>
  <Setting name="AccountSharedKey"/>
</ConfigurationSettings>

If this file doesn't match the configuration, you will get this nasty error message which doesn't tell you much. Thanks for telling me "Invalid configuration file." It doesn't say which file or what is wrong or anything.

Invalid configuration file

When you need to access Azure storage you will be using some of this information. Some helper methods have been created which read these values from your configuration and give you StorageAccountInfo objects. You can use them this way.

StorageAccountInfo queueAccount = StorageAccountInfo.GetDefaultQueueStorageAccountFromConfiguration();
StorageAccountInfo blobAccount = StorageAccountInfo.GetDefaultBlobStorageAccountFromConfiguration();
StorageAccountInfo tableAccount = StorageAccountInfo.GetDefaultTableStorageAccountFromConfiguration();

UPDATE: I forgot to mention when I first posted this that the Storage classes for this early release of Windows Azure are kept in one of the sample projects. It is the StorageClient sample project and the namespace needed is Microsoft.Samples.ServiceHosting.StorageClient. This will change once they've gotten some more substantial libraries set up for accessing Azure Storage.

This keeps things nice and clean, and makes connecting a lot easier.

Once you have these AccountInfo objects, you can pass them to the Create methods on the service classes which give you access to the storage. You can access that storage using the following code.

QueueStorage queueService = QueueStorage.Create(queueAccount);
BlobStorage blobService = BlobStorage.Create(blobAccount);

TableStorage tableService = TableStorage.Create(tableAccount);
// Also create the tables for your model class (see note below):
TableStorage.CreateTablesFromModel(typeof(MyClass), tableAccount);

Please note that table storage is slightly different from the other two. Queues and blobs are easier since they don't vary in structure. I recommend creating the table service based on a specific model class you have already defined; it should inherit from the TableStorageEntity class.
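
As an illustration, a minimal model class might look like this (MyClass and its property are made up; TableStorageEntity comes from the sample library and supplies PartitionKey, RowKey, and Timestamp):

using System;
using Microsoft.Samples.ServiceHosting.StorageClient;

// A made-up entity; the base class provides PartitionKey, RowKey, and Timestamp.
public class MyClass : TableStorageEntity
{
    public MyClass()
        : base("myPartition", Guid.NewGuid().ToString()) // partition key, row key
    {
    }

    public string Name { get; set; }
}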

Now you're ready to start working with Azure storage.
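
As a parting example, here is roughly how a role might enqueue a work item using the sample library (the queue name and message text are made up, and the method names are from my reading of the StorageClient sample, so treat them as approximate):

// Sketch: enqueue a work item for a worker role to pick up.
StorageAccountInfo account = StorageAccountInfo.GetDefaultQueueStorageAccountFromConfiguration();
QueueStorage queueService = QueueStorage.Create(account);
MessageQueue queue = queueService.GetQueue("workitems"); // hypothetical queue name
queue.CreateQueue();                                     // ensure the queue exists
queue.PutMessage(new Message("resize-image:42"));        // hypothetical payload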

by Brendan | 0 Comments

Integration, Integration, Integration.... Developers

Earlier this morning we saw some very interesting keynote talks here, including a preview of Windows 7. We're finally seeing some really great direction from Microsoft. I admit I am very much immersed in all that is Microsoft, considering my occupation, but plenty of times in the past I have been one of the people to attack Microsoft's mistakes. I think we've finally reached the stage of Internet integration that Microsoft was worried it would not be part of. Plenty of people have said that Microsoft got into the browser market because that is where everything was moving; they needed to be involved in the web-based revolution or they would miss everything.

In the past people would tell me that eventually it wouldn't matter what OS I ran, only what browser I used. This hasn't exactly happened, but it is slowly moving that way. I still think operating systems are EXTREMELY important. There certainly wouldn't be this PC versus Mac debate still raging if operating systems didn't matter. It is not all about the hardware here; PCs and Macs are not very different when it comes to hardware. They've got a few differences, but not many. The big difference is in the OS even today. Browsers may work cross-platform, but I am sure web developers know that even Firefox behaves differently on Windows than it does on a Mac.

At the keynote Microsoft was very much pushing the integration they want to create between all devices: client applications, web applications, and mobile applications all working together with synchronized information, allowing someone to leave one location, continue working while in transit, and pick that work up again at a new location on a new machine. This has been somewhat possible for a while, but not integrated well enough to be easy to do regularly.

Windows Live Mesh was pushing some of this, but it didn't seem impressive enough to me, or to others. Now with the addition of Windows Azure there is a lot more that can be done. The advances in Windows 7, along with the new versions of Office coming, can really make a difference.

I am certain I am not the only one who noticed the similarities between the web-based version of Microsoft Office and Google Apps. I would say the big difference is the integration Microsoft has made with the client versions of Office. Being the company which makes Office lets them integrate so well that simultaneous work can be done on individual documents.

Developers, Developers, Developers.... I mean integration. If you haven't checked out the video of this morning's keynote, I highly recommend watching it. You will see some great demonstrations of a lot of the new technology coming out of Redmond. I don't know if or where the video might be posted, but I will link it here if I find out. Since I was there, I haven't bothered looking for it.

by Brendan | 0 Comments

Optional and Named Parameters in C#.... Finally

I've used Python a decent amount in the past. I enjoy the language, and I've taught it to non-computer-science students at Kent State University. There are plenty of things I really love about the language; I could write plenty of blog posts about how totally awesome Python is. I tend to use it for quick little bits of code. I don't use much Python for production code, but I do like a lot of the language features.

One of the big pieces of Python I enjoy is the ability to define a default value for a parameter in a method, which allows for optional parameters. Wonderful stuff. I hate having to write extra overloads which pass in either the right values or placeholders marking a value as missing. Way too much ceremony just to cover this case.
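
To show the overload ceremony I mean, here is a hypothetical C# logger today, where three overloads exist only to simulate two default values:

using System;
using System.IO;

// Made-up example: three overloads just to fake two default parameter values.
static class Logger
{
    public static void Log(string message) { Log(message, "INFO"); }
    public static void Log(string message, string level) { Log(message, level, Console.Out); }
    public static void Log(string message, string level, TextWriter output)
    {
        output.WriteLine("[{0}] {1}", level, message);
    }
}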

Named parameters are also a lot of fun since you're able to specify which parameter you're passing. It works pretty well in Python. I don't know how useful it will be in C#, but it is a neat feature which doesn't add much complexity to the language.

In the session I am sitting in at the PDC, I am listening to a speaker talk about how this is coming down the pipe for C#. I would guess C# 4.0 will include these features. Now I can finally get rid of all of the extra overloads and just have my default parameters. No more extra overloads or passing in missing values.

For a lot of code we want to allow customization, but at the same time we want default values, since the 90% case doesn't require any customization. These features in combination could let a method with, say, 8 parameters have you pass the 2 required ones and then name only the third one you actually want to set. This should help clean up code and let us be concerned only with the code which is currently important to us. No need to worry about all the extra parameters in the method that we don't care about. A sketch of the announced syntax is below.
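
Here is a minimal sketch of the announced syntax (the Mailer example and its parameters are made up, and details may shift before C# 4.0 ships):

using System;

static class Mailer
{
    // Optional parameters get their default values right in the signature.
    public static void Send(string to, string subject,
                            bool html = false, int retries = 3,
                            string replyTo = null)
    {
        Console.WriteLine("Sending '{0}' to {1} (retries: {2})", subject, to, retries);
    }

    static void Main()
    {
        // Pass the two required arguments, then name just the option you care about.
        Send("someone@example.com", "Hello", retries: 5);
    }
}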

Finally!!! I've been wanting default and named parameters for a looooong time.

by Brendan | 1 Comments

Dynamically Typed Objects in C#

A while back the C# language gained the ability to declare variables as type "var". It didn't really add any dynamic typing, though; for the most part var is just there for developer convenience. Some people have complained about this, stating that C# 3.0 doesn't have dynamic types. The point is that with var, the compiler figures out the variable's type at compile time.

I am currently sitting in the Los Angeles Convention Center for PDC 2008, watching a session on the future of C#. One of the interesting demos is some of the new dynamic typing that is coming down the pipe. Variables are being defined with the type "dynamic". This is very cool, because the type is resolved at runtime, not at compile time. It seems to be a very powerful ability of the language even at this early stage. Perhaps we'll see some new stuff from this some time soon.
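
Here is a tiny sketch of what the demoed feature looks like (Calculator and GetCalculator are made up for illustration):

using System;

class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

class Program
{
    // Returning object stands in for "the static type is unknown here".
    static object GetCalculator() { return new Calculator(); }

    static void Main()
    {
        dynamic calc = GetCalculator();
        int sum = calc.Add(10, 20); // member lookup happens at runtime, no cast
        Console.WriteLine(sum);
    }
}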

Looks like they're combining the dynamic and var so they work very well together.

by Brendan | 0 Comments

Microsoft Announced Windows Azure at PDC

In the recent keynote here at Microsoft PDC, they announced what they're calling an operating system for the cloud. Windows Azure is a platform for developers to create applications which are highly scalable. Over a month ago I was at Microsoft's campus looking at their Red Dog project; now we've all learned that Microsoft has officially named this project Windows Azure. With this announcement they are launching their CTP, including a live cloud on which people can publish their cloud-based applications.

For now, hosting applications on the cloud will be free of charge. Once Microsoft moves further along with its cloud hosting abilities, this will become a pay service. They've not yet said how much these services will cost or even really what you'll be charged for. It looks like a lot of new Azure stuff will be coming out soon.

One of the neatest applications I've seen here is one I saw a preview of when it was in early development in Redmond. Bluehoo is a neat application targeted at mobile users. It allows their phones to communicate with each other and with central services hosted on the cloud, so all of the users can know who is around them and what those people's interests are, based on profiles the users specify. There is so much potential for this application; I can't wait to see what they do with it.

It would be great to walk into an establishment of some kind and see that there are people there who share your interests. You'll be able to start conversations with people based on this. According to the Bluehoo guys, the application will be available as a beta service online later today. Go check it out.

by Brendan | 0 Comments

Throwing Away Return Values

It probably took me 15 seconds to track down this error in my code, and those wasted seconds, while very minor, are exactly why I wish Visual Studio or ReSharper or some program would alert me to what I had mistakenly done. With some methods it is easy to throw away a return value without noticing.

If I am calling a method and ignoring its return value, it might be nice if the IDE made that a little more obvious. Perhaps an underline with a note explaining that I am not using a return value. Sure, lots of people ignore return values all the time, but they really shouldn't; the return value probably exists for a reason.

As an example, assume you want to get a substring.

string myString = "LongStringWithTooMuchStuffSoItProbablyNeedsToBeCutDownToSize";
myString.Substring(5,10);
Console.WriteLine(myString);

Looking at that code, it is pretty easy to miss that I am not doing anything with the return value. Right now the value written to the console is "LongStringWithTooMuchStuffSoItProbablyNeedsToBeCutDownToSize". I am ignoring the value Substring returns, which means the code accomplishes nothing; strings are immutable, so Substring cannot modify myString in place, and I am just printing out the original string. What I really should have written is this.

string myString = "LongStringWithTooMuchStuffSoItProbablyNeedsToBeCutDownToSize";
myString = myString.Substring(5,10);
Console.WriteLine(myString);

It wouldn't be that difficult for a program to detect unused return values. I am already informed when I don't use variables. Since plenty of plug-ins for Visual Studio already read method signatures, one of them could easily detect that I am ignoring a method's return value. Ah, the little things. Not too big a deal, but kind of annoying, since I had to run my program and then go look at why my code didn't work as expected.

Maybe some program does mention this already, or maybe there is some setting I need to enable to do this. Please let me know if you know of one.

by Brendan | 0 Comments

Using Fiddler with Mozilla Firefox

Earlier today I read a very interesting blog post from Steve Smith. I use Firebug for all of my debugging purposes in Firefox, and I wish IE had a tool as amazing as Firebug; sadly, I know of none. I had a similar desire going the other direction, because I use Fiddler with IE. As far as I knew, it wasn't possible to use Fiddler with Firefox. That always annoyed me, because Fiddler is a powerful tool for analyzing HTTP traffic.

It seems Steve was under the same impression as I was. Luckily someone pointed out to him that Firefox can be manually configured to work with Fiddler. Check out Steve's blog post, where he details how to configure Firefox to use Fiddler; he has some screenshots and describes the steps pretty well.

Thanks, Steve, for posting this!
Thanks, Ivo Evtimov, for telling Steve about it!

by Brendan | 0 Comments

ASP.NET MVC Beta Released

Just in case you missed the blog post from Scott Guthrie yesterday, I will post the links here.

Scott Guthrie's MVC Beta Announcement

ASP.NET MVC Beta Download

One of my favorite little bonuses of using IDEs is all the helpers that generate code and files for me. If I call a method that doesn't exist, a stub of the method can be generated automatically. In the ASP.NET MVC beta they've added a new menu for adding views.

The Add View window it brings up looks extremely useful. I look forward to using it to create my views. It handles strongly-typed views and MasterPage selection.

Overall it looks like a nice release.

For those of you who want to deploy with ASP.NET MVC, keep in mind that Scott says this in his post.

Today's ASP.NET MVC Beta release comes with an explicit "go-live" license that allows you to deploy it in production environments.  The previous preview releases also allowed go-live deployments, but did so by not denying permission to deploy as opposed to explicitly granting it (which was a common source of confusion).  Today's release is clearer about this in the license.

Good day and enjoy ASP.NET MVC's newest release.

by Brendan | 0 Comments

Defining Progress in Software Development

One thing I probably say too often when working on code with other people is, "That's a new error. We're making progress." I take a bit of crap from a coworker of mine because of this; he thinks it is quite funny that new error messages are what I consider progress. After tracking down a bug that was causing some exception, it is nice to get a different message, as long as the newly written code didn't create the new error, of course. The idea when I say this is that I have removed a roadblock and gotten to the next one.

Perhaps we can make some analogy here about a hurdle runner jumping over a hurdle only to get to another one. Someone might say that it is bad that he is at another hurdle, but I prefer to say that it is great because he has made progress and is closer to reaching the finish line. Maybe he has one hurdle left, maybe twenty. It doesn't matter really as long as he is closer to the goal.

So leave me alone about the new error message thing being progress..... Ed.

by Brendan | 2 Comments

A Note on ASP.NET Session

A while back, I was asked a question by one of the junior developers here at Lake Quincy Media. I had him working on a little feature on an ASP.NET site which could be easily handled using Session. "Where is session stored, on the server or the client?" he asked. As a quick response I said that it's stored on the server. He followed that by saying, "So it doesn't use a cookie?"

And that was all it took to set me on one of my usual lengthy explanations of how a piece of functionality actually works.

How is ASP.NET Session stored?

In ASP.NET, session data is stored on the server. It is stored on a per-user basis and maintained only for a limited period of time. Anyone who has used sessions knows that they are accessed during server-side code execution, using strings as keys; Session data is stored as key-value pairs. The following is basically how we access session information. In this code I store a string in Session and pull it back out.

string name = "Brendan Enrick";
Session["fullName"] = name;
string nameFromSession = Session["fullName"] as string;

Pretty simple, really. It is a nice way to store data for a user during a visit to the site.

How are users associated with their session?

In the default scenario for ASP.NET Session, cookies are used. No, the data is not stored in the cookie. When a session is started on the site, the user is given a cookie (yummy cookie) which contains a unique identifier for accessing the session. As the user navigates around the site, the browser sends the cookie information to the server, and this is what keeps the user associated with his session. The server receives the cookie on each request, so when your code asks for data from Session, it pulls the data from the session associated with that user.
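
You can see the identifier in question from code; by default ASP.NET carries it in a cookie named ASP.NET_SessionId:

// Inside a page or handler: the unique id tying this user to his session.
string sessionId = Session.SessionID;
Response.Write(sessionId); // just to see it; don't do this in real code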

Can cookies be avoided?

Yes, it is possible to avoid using cookies. Session can run without cookies by setting the cookieless attribute on the sessionState element in the web.config file to true. When session is not using cookies, it instead places the session id in the URL the user requests. Internal links on the site contain the unique session id so that the processing on the server can access the correct session values. I prefer not to use this method, because it makes URLs all over the site look terrible.

How long are values saved in session?

By default, the session timeout in ASP.NET is 20 minutes. This means the session expires 20 minutes after the user last accessed the site, so as long as the user keeps using the site it will not expire, but if the user waits 25 minutes before clicking a link, the session will have expired. It is easy to adjust this: in the web configuration file, simply change the timeout value on the sessionState element.

<sessionState cookieless="false" timeout="45" />

And this is why asking me questions can be a bit crazy. I'll first go on a lecture about the topic and then eventually I'll publish a blog post so I don't have to lecture again. Now if someone asks me that question I'll send him here.

by Brendan | 1 Comments

Separation of Concerns

Continuing from my previous post about separation of code responsibilities, where I mostly discussed one aspect of a book I appreciate, I now want to comment on the concept of separation of concerns. While thinking about this idea I realized that I've been a follower of it longer than I previously thought. I admit I don't separate my code as well as I often could. I am sure everyone reading this has heard that we should make things modular and encapsulate pieces of our code; it is all part of what I consider the classical education taught to programmers everywhere.

Separation helps for many reasons. It allows us to think about only the currently important section of code; we need not always be concerned with how everything else works. It is much easier to solve problems with a solution that doesn't require being concerned with every detail at every step. If I want to tell someone how to load boxes into a truck, I want to say something like this.

  1. Pick up a box
  2. Place the box in the bed of the truck
  3. Repeat previous steps until no boxes remain outside the truck

I don't want to have to do something like this.

  1. Pick up a box
    1. Move next to a box
    2. Bend knees so you're at about the same level as the box
    3. Put hands on the box
    4. Grip the box tightly
    5. Lift up using your knees
  2. Place the box in the bed of the truck
    1. Move while still gripping the box
    2. Walk over to the back of the truck
    3. Move box away from body and toward the truck
    4. Let bottom of box touch the truck bed
    5. Release grip on the box
    6. Move hands away from the box
  3. Repeat previous steps until no boxes remain outside the truck

I prefer to be able to assume that the person I am talking to knows the simple things. If I have to go into this kind of detail every time I tell someone how to do something, I am going to forget details along the way, or say something incorrectly. I also may mess up the overall algorithm because I am too concerned with the details of the steps. With the different pieces separated, it becomes much easier to solve problems. There are plenty of other reasons to separate what we create.

Another reason it is nice to have our work separated out cleanly is that we can replace and adjust sections without ripping it all apart. If a section of code performs plenty of different tasks, it becomes a lot harder to change; when I go to make a change I am dealing with much more than I need to be. The benefits are plenty, and I am sure people can suggest plenty more. (I am also sure there are reasons to not have modular code.)

As I am sure is the case for many developers reading this, I spent a lot of time working with Linux machines. One thing I noticed with the classic applications found in the open source world is the tendency for individual programs to perform one task; the output of one program is often passed as input into the next.

One line I heard often is, "Why does program xyz need to sort its output? Just send it through program abc, because it sorts already." At first the idea of having a program just for sorting seems kind of silly. Isn't it easy to sort? Couldn't all of these programs just sort? Well, even sorting text can be fairly complicated. Which algorithm should you use? Should sorting be on the first character or the second? Maybe we want to sort based on the second column of data. I don't want to go into the code of a large number of programs just to update how each one sorts.

Along with separation of concerns comes the also very important need to break dependencies. When we perform this separation we are breaking things into separate objects, libraries, and who knows what else. Some dependencies aren't a big deal; most of my applications are fairly dependent on the .NET Framework, but I am not concerned with that. If I stop using the .NET Framework, I am probably switching languages and rewriting anyway.

While thinking about how to break up responsibilities among sections of code, I've really been noticing how great an idea those separate programs have always been in the Linux world. I've noticed a trend away from it, but the core applications still tend to follow that old principle: keep things small, performing only one task. Need something else? Let another program handle that task. As any code attempts to do more, it just becomes more difficult to work with.

The idealistic purpose of computers is to make our lives easier. Try not to let them do the opposite.

by Brendan | 1 Comments

Separation of Code Responsibilities

One book I would highly recommend is Working Effectively with Legacy Code. I should write a review of it at some point, but for now I just want to mention one great thing about it: beyond being very valuable to read straight through, it is an excellent reference book. Many of its chapters are devoted to specific scenarios which might arise when dealing with legacy code, and it explains how to handle each situation.

At the moment I am attempting to change some code which needs to make a bunch of API calls. Right now the code is not neatly written: methods are directly interacting with the API, performing in-memory work, and calling methods to save data. Since I am relatively new to this stuff, I figured I'd read the chapter titled "My Application Is All API Calls".

In the chapter, the author uses a simple example about a mailing list server. The application is a big jumbled-up mess; as the text preceding the code states, "We're not even sure it works". This is exactly the situation I am trying to avoid, and a great way to know something works is to have tests written for it which demonstrate the code's ability to function correctly. The tests are also a great way to show what a piece of code does.

When the author discusses how to design the application in a better way, he mentions his desire to separate the code's responsibilities. Now I know separating code responsibilities is important, but I think his example shows it really well.

1. We need something that can receive each incoming message and feed it into our system.

2. We need something that can just send out a mail message.

3. We need something that can make new messages for each incoming message, based on our roster of list recipients.

4. We need something that sleeps most of the time but wakes up periodically to see if there is more mail.

The nice thing is it then becomes so much easier to work with the API calls because they've so nicely been separated. One mistake I've made in the past is not separating out the code which needs to periodically perform some other action. I have attempted in the past to link that together with the code performing the local work. BIG MISTAKE!

Right now what I am planning on writing is going to have these separate responsibilities (sketched as interfaces after the list):

  1. Something to fetch data from an external source.
  2. Something to manipulate the retrieved data into its desired format.
  3. Something to store the data internally.
  4. Something to periodically check for new data.
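
A minimal sketch of those four responsibilities as separate seams (all of the names here are hypothetical):

using System;

// 1. Something that knows how to talk to the external API.
public interface IDataFetcher
{
    string FetchRawData();
}

// 2. Something that reshapes the raw data into its desired format.
public interface IDataFormatter
{
    Report Format(string rawData);
}

// 3. Something that knows how the data is stored internally.
public interface IReportStore
{
    void Save(Report report);
}

// 4. Something that wakes up periodically and kicks off the pipeline.
public interface IScheduler
{
    void RunPeriodically(TimeSpan interval, Action work);
}

// Placeholder type for the formatted data.
public class Report { }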

Notice the incredible similarity to the book's list. This is very nice for my application, because only one piece needs to know about the API and only one piece needs to know how the data is stored, where before some of that was a bit muddled together. This separation keeps the code cleaner and much easier to work on, and testing will also be a lot easier because of it.

One of the most difficult parts about testing code well is that lots of code is not separated well enough to be tested. Being too tightly coupled to an API makes code nearly impossible to test, so it is hard to tell whether a piece of code is working or what it does.

by Brendan | 2 Comments

Dependency Injection Frameworks

Lately I've been looking at dependency injection, and I've spent a little bit of time examining Ninject. I've read most of the Ninject documentation, and so far I'm very interested. I love that it doesn't depend on configuration files, and the "modules" it uses look very interesting. Whenever possible I prefer strongly typed everything; I love the compiler and everything it can do for me. I also love IntelliSense, and not depending on config files helps me use both of those. A rough sketch of what a Ninject module looks like is below.
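
This is roughly what a module looks like based on my reading of the Ninject documentation (IWeapon and Sword are the docs' own example types; exact API details may vary by version):

using Ninject.Core;

// The types being wired up; made trivial here for illustration.
public interface IWeapon { void Hit(string target); }
public class Sword : IWeapon
{
    public void Hit(string target) { System.Console.WriteLine("Slice " + target); }
}

// Bindings live in strongly typed modules instead of XML config files.
public class WarriorModule : StandardModule
{
    public override void Load()
    {
        Bind<IWeapon>().To<Sword>();
    }
}

// Usage, per the docs:
// IKernel kernel = new StandardKernel(new WarriorModule());
// IWeapon weapon = kernel.Get<IWeapon>();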

I've heard a little bit about some other dependency injection frameworks, including Windsor and StructureMap, but I'm not really sure what people are using. I'm pretty well sold on Ninject for now, so if anyone has opinions or information I need to consider, please let me know. I would be glad to hear what other people are using and what their opinions are.

Dependency injection makes a lot of sense, but with a few different choices out there, it is hard to say which route someone should take. I also like that Ninject seems to be a lot lighter weight than some other frameworks. In general I prefer add-ins with very small footprints; if there is one thing I can't stand, it is code that dominates everything else.

by Brendan | 1 Comments
