Welcome to AspAdvice

I often wander around the house with my laptop, and I always eventually reach the point where it's time to return to home base (the desk) and plug it back in to recharge the ol' battery. Usually (and particularly at night), the easiest way to do this is to grab the power cord connector and poke repeatedly at the back of the laptop in the general vicinity of the power receptacle until it finds its way in. Tonight, however, I found that if my hand wanders a bit too far to the right, it can (and will) close the circuit against the metal VGA port housing, and short out the motherboard in rather spectacular fashion. A neat side effect of this technique is that I've discovered a way to bypass that pokey Windows shutdown process, and stop the machine cold...Instantly. So, after the shock wore off, with a bit of trepidation and a lot more care, I tried once again, turned the laptop back on, and after a few tests everything *seems* to be in working order. No apparent harm done, save the fading remains of the unmistakable acrid scent of electrical fireworks.

Lessons learned:

  • Be more careful when plugging in the laptop.
  • Next time, buy the extended warranty with accidental damage coverage, just in case.

If you're going to be at PDC, you need to stop by the CodeSmith Tools booth and check it out.  CodeSmith is an awesome template-driven code generation tool that helps you cut down the time you have to spend writing the mundane, repetitious code that we all know and hate.  It can really change the way you develop, allowing you to focus more on the design and the important bits of code, and let the tool handle many of the more basic implementation details.  They're even going to be giving away several free copies there at the booth.  See Eric Smith's blog for more info.

Well, I've let you sweat for a while after my post on taking ownership of an existing application, but before the criers hit the streets with the news that I'm an out-of-control rogue who mangles any code that he touches, let me clarify just a bit.

The point of the post was *not* to advocate tearing through your next project restructuring assemblies and changing naming conventions.  In fact, I'll say with certainty that it's an *extremely* rare situation in which this particular approach is appropriate.  I'll be willing to wager that nearly any existing application that you touch will either be under active development by other developers, already in production, or already through a QA cycle.  Even if the existing code isn't great, in any of these scenarios, wholesale or drastic architecture or code changes will cause more problems than they solve.  I guarantee it.  The point, if I may be so bold as to finally get to the point, is that if you're going to be working extensively on an existing application, it's worth taking the time to think about what you can do to best become intimate with it, to take an ownership in it, at least in the pieces of it that you'll be working with.  That way, when you *do* start to add new code, or modify existing code to meet the new requirements, it will fit with the existing application and strengthen it instead of fighting against it and making it brittle.  And that, my friends, is the point.


A question came up in a forum that I frequent, about web references in a web service client application.  The gist was that the poster had a web service deployed in both development and production, and needed the client app to also be able to switch between the two.

There are a couple of ways that I've thought of to handle this.

  1) Brute force. Create a web reference to the development service, code against it, and before you deploy, delete it and create a reference to the production service with the same name.
      I don't like this option at all.

  2) Create two web references, and write lots of ugly conditional code all the way through your app to deal with it. This is my *least* favorite option.

  3) Create a base class for your proxy classes, or an interface. (I like the interface better, because both the web service itself and the proxy classes could implement it.) Lots of work either way.

  4) I *think* (but I haven't tried it) that you *should* be able to switch the URL in the proxy class, say, based on a configuration setting. If this would work, it'd be by far the easiest and cleanest method.
      If, however, you ever regenerated the proxy class, this would fail silently, and you might not know the difference until it was too late, which is an edge that the interface might have.

Any thoughts? Any obvious (or not so obvious) methods that I've missed?  What are people doing in the real world in these situations?
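For what it's worth, here's a rough sketch of option 4, written in Python as a stand-in for the generated .Net proxy (since I haven't tried the real thing); the proxy class, config section, and URLs are all made up purely for illustration:

```python
import configparser

class IssueServiceProxy:
    """Hypothetical stand-in for a generated web-service proxy class."""
    def __init__(self):
        # The URL baked in when the reference was generated (the dev box).
        self.url = "http://dev.example.com/IssueService.asmx"

def create_proxy(config_text):
    """Option 4: override the generated URL from a configuration setting."""
    config = configparser.ConfigParser()
    config.read_string(config_text)
    proxy = IssueServiceProxy()
    # One line of code, one config entry, and no conditionals in the app itself.
    proxy.url = config["webServices"]["issueServiceUrl"]
    return proxy

sample_config = """
[webServices]
issueServiceUrl = http://www.example.com/IssueService.asmx
"""

proxy = create_proxy(sample_config)
print(proxy.url)
```

The appeal is that the switch lives in one place; the risk, as noted above, is that regenerating the proxy silently resets the baked-in default, and nothing breaks at compile time to warn you.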


As I posted a while back, I've been working occasionally on modifying the ASP.Net IssueTracker Starter Kit, and I thought that I'd blog about how I start on a project like this.

The first thing that I did was to load it up in Visual Studio and commit the whole thing to source control.  Now I had a point that I could get back to, regardless of how badly I screwed anything up.  Then, I set out to get an idea of the structure, not by looking at code, but by poking around and looking at how the projects fit together, what referenced what, and getting an overview of what files were where and so forth.  Once I had a good idea of how the application fit together, I tore it apart.  I added projects, renamed the assemblies, and changed the entire namespace structure to fit with my standard naming conventions.  I moved code from one assembly to another and, in the end, just generally caused as much chaos as I felt the compiler could handle without expensive psychiatric counseling.  I attempted to build the solution, and watched bemusedly as the compiler spat out error after error, right back at me.  Then, one by one, I fixed every one of the errors, which required me to touch nearly every file in the entire solution.  For the next step, I set my sights on the database.  Every object in the IssueTracker database is created by a script called, who would have guessed, CreateDatabaseObjects.sql.  It was a little over three thousand lines long when I started, and it had been generated by SQL Server's Enterprise Manager, by the look of it.  The formatting was iffy (as SQL Server generated scripts tend to be), and so I sat down with that sucker and re-formatted the entire thing.  I reworked every single line, from first to last, so that it was uniform and conformed to the formatting standards that I use when I write SQL by hand.

"Why?" you ask?  "Why, Xander, would you spend hours rearranging code and re-formatting SQL statements?  Does it really make the application better?  Does it make enough of a difference in the performance, or the code, to be worth it?"

The answer is no.  The answer is no, because you'd be asking the wrong questions, and making the wrong presumptions about my goals in this exercise.  Did I make the application better?  Yes, I think that I did, at least a bit.  For instance, I think that the logical tiers are better separated now that I've teased apart the UI and business layers.  Not better enough to be worth the time that I spent, however.  What was really invaluable to me was having to touch every last piece of the application.  Having to recreate connections between the classes that had been broken, having to reformat the DDL for every column in every table, having to look at and toy with every single stored procedure...This is where the value was.  I can *always* get a concept of the application by poking around.  Looking at code, picking through tables and sprocs with Enterprise Manager...These things will teach me bits about the app.  I can't *know* it, however, until I've worked with it, and worked with it all.  Touching every file, every single database object, these are the things that I do that make this application mine, to start me down the path of knowledge that I need in order to work with this application and mold it into what I want it to be.  These are the things that I do in order to establish an ownership, so that the code that I write won't stand out like a slapped-on patch, but will fit with the structure and the vision of the application as if it were always there. This is a prerequisite for me, because only if I have this sense of ownership can I feel that I'm writing code that truly integrates.

I've just been added onto Darren Neimke's Blogroll. I guess that means that I'm actually going to have to start posting some quality content now and again. Dang it all. :-)

Several years ago, I read this article about the engineers who write and test the systems that run the space shuttle.

For some reason, I was thinking about it this morning, and I decided to dig up the link and post it.  It's a fascinating contrast to the world of commercial application development, where all-night coding sessions are fueled by Red Bull and last-minute feature additions are often the norm.  I love getting to see different perspectives, and this article was a really cool chance to look at how some of the most stable (and expensive) code in the world is written.


Chances are, it’s white.  After all, that’s the standard default setting for every browser that I can recall having used.  Why does it matter?  Should it matter?  Well, I’d guess that 99.9% of the time, it doesn’t matter.  Because of that remaining one-tenth of a percent, however, it *should* matter to you if you’re a web developer.

The issue is this.  The background color plays a big role in the color scheme of most websites.  There are all kinds of elements tied into it.  Image backgrounds, tables, all sorts of layout and content elements depend on the background of the page to present a clean, unified look to the visitor.  So, why is it that so many developers leave this up to chance?  That is, why, when they have a site that is keyed off of a white background, do they leave the background color to be handled by the browser instead of declaratively specifying what it should be?  Often, I think, because they never even see the flaw.  The default background color on their browsers is set to white, and it never even occurs to them that their site is going to look radically different to that small group of people who have customized their browser settings, with potentially ugly results.  This isn’t only an issue for the little guys…Change the default background color for your browser to something non-white and have a look, for example, at this:  http://www.msnbc.msn.com

Personally, the background color for my browser is set to a pale yellow, but it really doesn’t matter.  I’d suggest that if you’re a web developer, you need to pick a default background color that works for you.  Any color will do…As long as it isn’t white.


Rob Howard has announced the release of Community Server v1.0.  It's a very cool app for blogs, forums, photo galleries and more.  If you haven't seen it yet, check it out.

The really cool thing, is that it's going to be an open source app.  Rob's said that the source should be available sometime next week.


Or at least that's what they're saying. http://www.schneier.com/blog/archives/2005/02/sha1_broken.html

Thank goodness that the .Net framework has CryptoServiceProviders for SHA-256 and SHA-512 as well as SHA-1. Personally, I don't have any applications that I'm currently maintaining that have critically sensitive data, but I'll probably start migrating my hashing utilities from SHA-1 to one of those providers anyway, just because I'm particular about these things.
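The difference in digest size is easy to see. Here's a quick illustration using Python's hashlib as a stand-in for the .Net providers (same algorithms, different platform; the input string is just made up):

```python
import hashlib

data = b"some moderately sensitive data"

# SHA-1: a 160-bit digest (40 hex characters). This is the one that's been attacked.
print(hashlib.sha1(data).hexdigest())

# SHA-256 and SHA-512: 256-bit and 512-bit digests, with no comparable attacks known.
print(hashlib.sha256(data).hexdigest())
print(hashlib.sha512(data).hexdigest())
```

Migrating is mostly a matter of swapping the provider, with the caveat that the larger digests won't fit in a column or field sized for SHA-1 output.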


I’ve recently re-paved my desktop, and one of the things that I’ve wanted to do was to install a virtual machine system. Before being re-built that machine had all sorts of issues, not the least of which was instability caused by my incessant penchant for installing and uninstalling beta software, community tech preview software, trial versions of software, and software that I just want to check out. The net effect of all of those system changes led to a system that was so unstable that it was, for all practical purposes, unusable. I knew that it would be to my benefit to isolate the biggest chunks of that software flux to virtual machines that I could re-image on a whim, with the primary system none the wiser.

I’ve heard good things about VMWare Workstation, but I have an MSDN subscription which includes a copy of Microsoft Virtual PC 2004, and having seen it in use, I thought that it would suit my purposes admirably.

I had a bit of initial hesitation, in that I wasn’t sure how hard it was going to be to install and configure, what kind of performance I’d get out of it, and out of the system as a whole after it was installed, and how difficult it would be to interact with the various virtual machines.

My fears, however, were unfounded. Installing Virtual PC was a cinch; it wasn’t any different from installing any other software program. Creating a new virtual machine was done with a wizard that lets you choose the locations for the virtual machine and disk files, the performance settings, and the OS that you’ll be installing. (Tip: If you’re not sure about the performance settings, you can leave them at the defaults and change them later.)

Once I had the virtual machine created, I started it from the Virtual PC panel, and it fired right up in a window, and asked for a boot disk. I dropped in the Windows XP Pro DVD, and installed it just as I would have on a physical machine.

After I’d installed all of the Windows Updates, stopped the services that I don’t need running, and generally gotten the virtual machine to a state that I was happy with, I shut it down and copied the machine and hard disk files to another folder, and renamed them, so that I will always have a good base install without having to go through the motions of actually re-installing Windows XP, SP2, and all of the following security updates. I can now have a fresh virtual machine up and running in a couple of minutes. There may be a built in mechanism to do this more cleanly than just copying the files in Windows Explorer like I did, but I didn’t find it in a quick perusal of the Virtual PC help files.

The performance, at first glance, seems to be pretty good. The machine that it’s on is decent, but by no means spectacular. It’s got a 2.4GHz P4, 768MB of RAM, and the disk that the virtual machines reside on is a 7200RPM IDE drive with 8MB of cache. Using the virtual machine itself seems to have a response a bit slower than that of my average Terminal Services session, and it seems to be quite acceptable. When I shut down the virtual machine completely, the performance of the host machine is unaffected…This is what I was most worried about. You can also configure the priority of the virtual machine processes when you have them running in the background. I’ve chosen to give the host machine priority in this scenario, but I haven’t had a chance yet to see how much this affects it.

All in all, this is a move that I’m definitely glad that I’ve made, and now I can run all of the beta software I want without ever again having to worry about those dire “You may need to re-format your drive” warnings. :-)


A browser security vulnerability has come to light, one that affects nearly every current browser, *EXCEPT* for Internet Explorer.
Firefox spoofing flaw goes international

What is the world coming to?


Rob Howard, of Telligent Systems, gave a presentation last Thursday night at the Plano .Net User Group on their new project, Community Server. Community Server is the coming together of three of the premier .Net applications available today…The ASP.Net Forums, .Text (a blogging engine), and NGallery, a picture gallery application. One thing that caught my eye was when he explained how thoroughly they’ve disconnected Community Server’s form from its function. The heart of Community Server is driven by a substantial number of custom server controls…Controls that render no UI whatsoever. The UI is generated by user controls that are injected in at runtime as a skin. It makes for an extraordinarily flexible system, and I think that I’m going to try to do something along those lines in my implementation of IssueTracker.

No, it doesn’t need it. IssueTracker is fine the way that it is. Why do I want to put the effort into this sort of enhancement, then? Primarily because it’d be fun to do, but it would, at least in a theoretical sense, also have a practical application. Bug tracking packages are used day in and day out, and a good design can make or break them. At the last company I worked for, that was the main reason that we abandoned Bugzilla in favor of a commercial app. Bugzilla did everything that we wanted (more than the app that we purchased to replace it) but it just looked bad, and it was a pain to use. We actually used the Red Hat build, which is substantially more pleasing to the eye than the standard build, but it wasn’t good enough, and it was simply too hard to do anything to it other than minor tweaks. With a UI that’s as dynamic as the UI that’s shipping with Community Server, it would be child’s play to update and modify until it suits the users that it’s been installed for. A single installation could even be skinned significantly differently for individual users. That’s some seriously awesome power for an application that’s all about productivity.


So, last month I quit my job and opened up shop as a consultant. It’s been an enjoyable adventure so far, and I’ve been posed with a fair number of new challenges, one of which was that I needed a bug-tracking solution.

Now, I wanted an application with web, mobile, web service, and Windows interfaces, because you never know how you might want to extend something like this. I wanted something in ASP.Net, because I’ve got machines set up to run it, and I’m more comfortable with ASP.Net than just about anything else. I also wanted it to either be open-source, or at least extensible to some measure, so that I could tweak it.

I’ve used various commercial and free products in this space before…Primarily Bugzilla and a Windows-based commercial interface called IssueView. (It had a web interface as well, but we hadn’t implemented it.) They were ok, but they didn’t meet as many of my criteria as the ASP.Net IssueTracker Starter Kit.

The IssueTracker is open-source, written in C#, and comes with both web and mobile interfaces. Do I really *need* Windows and web service interfaces? Do I *need* to be able to tweak it? Truthfully, no, but the fact of the matter is, I like to tinker, and it looks like a fun project.

I’ve gotten it installed now, and over the course of the weekend I’ve had a chance to look it over, and start to make the changes that I want.

The first issue that I had is that all of the business logic is buried in the web application itself. That’s fine, for the most part, for the product as it ships, though it is kinda weird that the mobile application (which is in a separate solution entirely, as shipped) had a reference back to the web app. For my purposes, however, it just wouldn’t do. The last thing that I need for the eventual distributed Windows app interface is a reference to a web project, so I pulled all of the business logic and the data layer out into a separate core library (XseIssueTrackerCore), which turned out to be remarkably easy. The business layer and presentation layer didn’t turn out to be anywhere near as strongly-coupled as I feared they might be.

The second issue was more interesting. The business layer *was* coupled to the web interface in that one of the security classes was expecting an HttpContext to be passed in, which again, obviously won’t work for a Windows interface. It would, however, work for both the web and mobile interfaces and so instead of repatriating it to the web application, I broke it out into a library specific to HTTP interfaces which I called XseIssueTracker.Web. Naturally, I expected this to break the callers until I updated their references, but to my surprise, it didn’t. The solution built fine, ran fine, and otherwise appeared to run without a hitch. I need to do a global find at some point in the future to see if it’s being called somewhere weird that I haven’t found, or if it’s really and truly useless.

I’ve got a ton that I want to do to it…I can’t stop coming up with ideas, and I’ve got far more than I want to cover now, so I think that I’m going to continue this thread in the future. Until then….


My girlfriend called me up this evening with a question about custom formatted dates in VBScript. Specifically, she needed a date in MMDDYYYY format.

VBScript has a pretty (well, extremely) limited set of date formatting options, so if you want anything other than the basics, you have to be prepared to roll your own.

Personally, I tend to use the ISO format (YYYYMMDD or YYYY-MM-DD) for most of my applications, for clarity and standardization, though this is another format that VBScript won't generate out of the box.  So, here's a quick function that will generate ISO-formatted dates, and it should be easily adaptable to any other format that you might want to use in your applications.

Function GetDate(dateVal, delimiter)

	'Declared to comply with Option Explicit
	Dim dateMonth, dateDay

	'Convert the dateVal parameter to a date.
	'This will cause an error, and cause the function
	'to exit, if the value passed is not a real date.
	dateVal = CDate(dateVal)

	'Convert the delimiter parameter, which designates
	'the delimiting character between the date-part values,
	'to a string.  If you don't want a delimiting
	'character (such as / or -), simply pass in an
	'empty string for this parameter.
	delimiter = CStr(delimiter)

	dateMonth = Month(dateVal)
	dateDay   = Day(dateVal)

	'Build the result, zero-padding the month and
	'day out to two digits each.
	GetDate = CStr(Year(dateVal)) & delimiter
	If dateMonth < 10 Then
		GetDate = GetDate & "0"
	End If
	GetDate = GetDate & CStr(dateMonth) & delimiter

	If dateDay < 10 Then
		GetDate = GetDate & "0"
	End If
	GetDate = GetDate & CStr(dateDay)
End Function

Now using the function is as simple as:

'Calling without a delimiting character
Response.Write(GetDate(Now, "") & "<br />")
'Or with a delimiting dash
Response.Write(GetDate(Now, "-") & "<br />")
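As an aside, this is exactly the sort of thing that richer date libraries hand you for free; the same formats in Python, for instance, are a one-liner apiece (shown here purely for comparison, with an arbitrary date):

```python
from datetime import date

d = date(2005, 2, 18)
print(d.strftime("%Y%m%d"))    # 20050218
print(d.strftime("%Y-%m-%d"))  # 2005-02-18
print(d.strftime("%m%d%Y"))    # 02182005, the MMDDYYYY format from the original question
```

In VBScript, though, you'll be rolling the padding and concatenation yourself, as above.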
