
OpenXML Standardized and Sour Grapes

Earlier this week, the OpenXML document format was standardized by the ISO/IEC, with a huge 86% of voting countries favoring the format (news here, among other places).  While this is interesting and a win for anyone using Microsoft Office document formats (who isn't?), it's also a bit disappointing that those who opposed the format's standardization have opted not to accept the decision of the global community.  Instead, they've shifted into sour grapes mode and are attacking the process and everyone involved (at least the ones who didn't agree with their perspective).  Jan van den Beld has a great writeup of the accusations being thrown about by the folks who couldn't enforce their will upon the world through proper channels - you should check it out if you're at all interested in this topic.

Posted by ssmith | 5 Comments

CruiseControl.NET Caching Old Project Locations SOLVED

As I mentioned in my previous post, we're just wrapping up a continuous integration solution for a client (and if you're not using continuous integration for your team, you should be; if you don't have time to set it up, contact us to do it for you - you'll thank me later), and one of the last requirements changes was an update to where on the build server's hard drive the project files should reside once they're checked out from source control.  After making this change in the ccnet.config file for the various CCNET projects, and also making the change in the source control provider's working folder association for the build account username, I figured things would just work.  I forgot about one thing, and it caused me frustration for the better part of a day.  That thing was the CCNET state files.

After making my changes from d:\buildserver\source\ to the new location in the ccnet.config file (and doing a find and replace to be sure I hadn't missed it anywhere), I started looking for places the source control client might have been caching the old working folder associations.  I went down this road for a while, and did find a bunch of places where the client was storing settings, but nothing with this location.  Meanwhile my builds were all failing, because the old folder no longer existed and so the attempts to perform a get from source control were failing.  I searched the registry - nothing.  I searched the entire file system (all disks!) - nothing.  Unfortunately this was not my server, and it had the default search configuration of not looking inside all file types (Scott Forsyth at Orcsweb has the registry hack to correct this, detailed here).  In an act of desperation, I had the client restart the server for me.  Still no good.

At this point I grabbed the CCNET source and started going through it to try to find where it was getting that path from.  Being open source is great for this, but of course it's not like I could search the source for the folder name I was looking for, and CCNET deals with a bunch of different folders (both in source control and on disk), so if you're unfamiliar with the code it's a bit tough to follow exactly which variables refer to which paths in the build process.  As I was getting close to finding the problem in the source, I noticed something in my CCNET /server folder - a bunch of [PROJECTNAME].state files.  This was one of those combination "ah-ha!" and "oh, man, was it really that easy" moments.

Yes, it turns out that if you open up these state files, they include, among other things, the location on disk for the project.  (If you're ever searching the file system for a particular string, be sure you're searching all file extensions, since various vendors will make up all kinds of extensions and put text into them.)  If you update ccnet.config to use a new workingDirectory, CCNET won't care, because it's going to use whatever is in the state file instead.  You can configure the state file location in your ccnet.exe.config or ccservice.config - by default you'll find the state files co-located with these EXEs.  Just remember to look for them.

The state files include the following elements (a made-up sample file is sketched after the list) - if you see any of these being cached by CruiseControl.NET, go delete the state files and it should stop caching them.

  • ProjectName
  • ProjectUrl
  • BuildCondition
  • Label
  • WorkingDirectory
  • ArtifactDirectory
  • Status
  • StartTime
  • EndTime
  • LastIntegrationStatus
  • LastSuccessfulIntegrationLabel
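
For reference, a state file is just serialized XML.  Here's a made-up sample of roughly what one looks like (the root element name and exact layout can vary by CCNET version; the element names are the ones listed above, and the values are invented):

<?xml version="1.0" encoding="utf-8"?>
<IntegrationResult>
  <ProjectName>MyProject</ProjectName>
  <ProjectUrl>http://buildserver/ccnet</ProjectUrl>
  <BuildCondition>IfModificationExists</BuildCondition>
  <Label>123</Label>
  <WorkingDirectory>d:\buildserver\source\MyProject</WorkingDirectory>
  <ArtifactDirectory>d:\buildserver\artifacts\MyProject</ArtifactDirectory>
  <Status>Success</Status>
  <StartTime>2008-04-01T10:00:00</StartTime>
  <EndTime>2008-04-01T10:05:00</EndTime>
  <LastIntegrationStatus>Success</LastIntegrationStatus>
  <LastSuccessfulIntegrationLabel>123</LastSuccessfulIntegrationLabel>
</IntegrationResult>

The WorkingDirectory element is the one that was trumping my ccnet.config change - deleting the state file (or fixing the path by hand) clears it.
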
Posted by ssmith | 2 Comments

Not Working for Microsoft

I thought I should post a follow-up, since I'm sure many people read my post on Tuesday about going to work for Microsoft on some code-named project and (a) didn't remember it was April Fool's Day and (b) didn't then read the comments where I pointed out that it was a joke.  I'm still happily self-employed and working on Lake Quincy Media, ASPAlliance.com, and a new consulting business that so far is focusing on agile development and, in particular, setting up continuous integration for shops that don't already have it (just wrapping up one of these).

I like being my own boss (or at least, having my wife as my boss - she's the CEO of Lake Quincy and runs the show there), but if I ever did go to work for somebody, it would probably be Microsoft.  So... maybe some day, but not yet.

Posted by ssmith | 2 Comments

Refactor Request

Not sure if it's already there, but the folks at DevExpress or JetBrains (or Microsoft, but I don't want to wait for another VS) should have a refactoring for CodeRush/Refactor! or ReSharper that will convert verbose properties into the new C# 3.0 automatic properties, like so:

Make this:

protected bool isSponsored = false;
public bool IsSponsored
{
    get { return isSponsored; }
    set { isSponsored = value; }
}

Into this:

public bool IsSponsored { get; set; }

 

Ideally, it should work in two forms:

1) Right click on the property (field name let's say) and offer it as an option in that context.

2) Apply to all properties in a class.

And it should only do it if it is safe, of course.  If I had set isSponsored to true in the original code, the refactoring wouldn't have been possible, since it would have changed the behavior.
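
For instance (an illustrative case, not from any real codebase), this version couldn't be collapsed safely, because C# 3.0 automatic properties can't declare an initializer:

protected bool isSponsored = true;  // non-default initial value
public bool IsSponsored
{
    get { return isSponsored; }
    set { isSponsored = value; }
}

Converting this to public bool IsSponsored { get; set; } would silently change the initial value to false, so the refactoring would have to either refuse or move the initialization into every constructor.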

Of course, if some enterprising developer has already done this, please comment with a link.  Maybe Keyvan's done it - he just published a book on VS extensibility...

Posted by ssmith | 8 Comments

Silverlight Rehab

Check out Dan Fernandez, Adam Kinney, and others in this very funny five-minute video about Silverlight Rehab on On10.net.

Posted by ssmith | (Comments Off)

The Evolution of Status Pattern

An interesting pattern that I see in many of the applications I've worked with is the notion of status, and how it tends to evolve over time.  This is probably familiar to most of you, though perhaps you've never thought about it.  Consider the following scenario:

Requirement - The system should have Users, to control access via authentication.

At this stage, the developer creates a User class and a User table with a few fields like UserId, UserName, Password, Email, etc.  Status is implicit - if there exists a row that matches the given UserId (or UserId and Password for authentication), then the user's status is valid.  Otherwise, not.  Ah, the beauty that is simplicity.
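
A minimal sketch of this first stage (the class and column names are just illustrative):

public class User
{
    public int UserId { get; set; }
    public string UserName { get; set; }
    public string Password { get; set; }
    public string Email { get; set; }
}

// Status is implicit: a user is "valid" simply if a matching row exists, e.g.
// SELECT COUNT(*) FROM [User] WHERE UserId = @UserId AND Password = @Password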

Invariably, requirements change, and scope creeps...

Requirement - Administrators should be able to disable users without deleting the record.

At this stage, the design is updated to include some kind of flag to say whether the User is enabled or disabled.  In my applications this usually comes in the form of an Enabled bit column in the database (defaulting to 1) and a corresponding Enabled bool property in the associated User class.  This refactoring involves a bit of work to anything that works with Users, including authentication and lists of users (where only active users should be listed).

You'd think this would be the end of it.  But no, it gets better...

Requirement - New users should be pending until approved by an admin.

At this point, we've surpassed the capabilities of a bit/boolean.  A nullable bit might still get us by (since our needs are now ternary), but I usually bite the bullet and go with a Status field at this point.  It's a bit of a hack to use a null state as a valid state for this, I think.  So at this point a new table in the database is created, UserStatus, which has an ID and a Name and includes rows for Pending, Approved, and Disabled or something similar.  The User table is updated to include a UserStatusID column (foreign key) and the User class is hooked up with an enum or reference to a UserStatus object.  Refactoring this involves removing the Enabled field, revising all tests and references to it so that anything looking for Enabled = true is now looking for UserStatus = UserStatus.Approved.  Various queries for lists of users must now be updated as well, which might involve work in stored procedures or generated DAL code (LLBLGen, LINQ, NHibernate, whatever).
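
A sketch of what this stage might look like (names are illustrative):

public enum UserStatus
{
    Pending = 1,
    Approved = 2,
    Disabled = 3
}

public class User
{
    public int UserId { get; set; }
    public string UserName { get; set; }
    public UserStatus Status { get; set; }  // maps to the UserStatusID foreign key;
                                            // replaces the old bool Enabled property
}

// Anything that used to check user.Enabled now checks:
// if (user.Status == UserStatus.Approved) { ... }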

Really, this should be good enough.  But no, sometimes the evolution continues...

Requirement - Pending users should be either Approved or Rejected within 24 hours, and the user who changes their status must be logged.

Now things really start to get interesting, since a log of changes is required.  At this point the question arises of whether the afore-created UserStatusID column is still required, or whether the status of a user can simply be determined by looking at the last action recorded for it in its log table.  The UserStatusLog table is going to include an ID, a UserID, a NewStatusID, a DateCreated datetime field, and an AdminUserID to record who made the change.  Depending on performance considerations, we might want to refactor away UserStatusID on the User table and just grab the most recent NewStatusID for this UserID from the UserStatusLog table instead.  This would make for a smaller footprint for the User table, while making checks of status much less performant (but it's a more normalized approach).  Assuming you'll be using some kind of caching in the business tier, it shouldn't make a huge difference until you start doing things like trying to index your queries on user status, and then you'll probably want to denormalize things and add the column back in.  So, to save time on that, I would just keep it around and make sure it's updated and kept in sync with the log (production database tests are good for this).  Having made that decision, the only refactoring that needs to be made is in the code that updates UserStatus, to ensure that the change is logged.  I would typically do this in the business layer, but it could also be done at the DAL, sproc, or even trigger level depending on how you want to architect it.
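
A rough sketch of that business-layer update (the repository fields and method names here are hypothetical - the point is just that the status change and the log entry get written together):

public class UserStatusLogEntry
{
    public int Id { get; set; }
    public int UserId { get; set; }
    public UserStatus NewStatus { get; set; }   // maps to NewStatusID
    public int AdminUserId { get; set; }        // who made the change
    public DateTime DateCreated { get; set; }
}

public void ChangeUserStatus(User user, UserStatus newStatus, int adminUserId)
{
    user.Status = newStatus;  // keep the denormalized column on User in sync
    _userStatusLogRepository.Add(new UserStatusLogEntry
    {
        UserId = user.UserId,
        NewStatus = newStatus,
        AdminUserId = adminUserId,
        DateCreated = DateTime.Now
    });
    _userRepository.Save(user);
}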

Sometimes, this is (finally) sufficient, but occasionally you end up with something even worse (which I think is probably just a bad design, but we'll address it anyway):

Requirement - Users who were deleted but later reinstated should be formatted differently in the UI to make it clear they're on probation.

I'm stretching for a scenario here, I realize.  The thing I'm going for is a status that depends on a series of prior status changes.  In this case, you could probably get away with creating a new status, Probation, and updating everything to use this (and otherwise treating it like Approved).  But in some scenarios the number of variations of status can be enough that you don't want to just keep adding static Status options, but rather you need to use some kind of formula based on the log of events to come up with a dynamic status.  This is the ugliest version of this evolution, and one you should really seek to avoid if possible.  The complexity of the schema is awful considering what seems like a simple enough task, and usually the solution is to separate out logged status events into multiple categories that each have only a small, known subset of values.  It's also important to keep dynamic status options (e.g. they're Active if they've logged in in the last day, otherwise Inactive) out of the database and in business rules.  You don't want to be doing table updates for statuses that change based on the passage of time, if you can avoid it.
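
As a small sketch of that last point (the LastLoginUtc property is hypothetical), a time-based status stays out of the database entirely and is computed on demand:

// "Active" means the user has logged in within the last day; nothing in the
// database needs updating just because time passes.
public bool IsActive(User user)
{
    return (DateTime.UtcNow - user.LastLoginUtc) < TimeSpan.FromDays(1);
}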

 

That summarizes my observations on the Evolution of Status Pattern in the applications I've worked with.  You really don't want to just start out with the Status lookup table unless you are absolutely certain you're going to need it, because otherwise it will just slow things down.  Remember YAGNI.  As long as you have well-factored code and tests for your interactions, the refactorings at each incremental change will not be too difficult.

Posted by ssmith | 1 Comments

Outlook Performance Tips

I've been living with Outlook 2007 since it shipped, and it's been pretty painful, but my life is in it so I'm stuck with it.  I've posted some Outlook tips in the past about how to deal with it not shutting down properly, and that has grown into a fairly sizable post with dozens of comments (and #1 for the search term outlook did not shut down properly.  Nice.).  Anyway, I have some additional tips that I thought would be worth sharing.

Use Exchange If You Can

In general, most of the people I've talked to say that Outlook 2007 doesn't have any huge performance problems when it is working with Exchange.  This makes perfect sense to me since I'm sure Microsoft uses Exchange extensively and they probably would have noticed if its performance were complete poo in this scenario.  If you can, use Outlook with Exchange to get the best experience.

Avoid Touching Multiple PSTs At Once

The biggest issue with performance that I notice with Outlook 2007 is disk access.  The program goes nuts with disk access any time it's doing anything (and sometimes when it doesn't appear to be doing anything).  What makes this worse is if you have several PST files (because of course you want them to be small - see below) and you've set up rules to move things automatically between your main PST and one or more others.  This takes the disk access problem from bad to horrendous, and will really slow things down.  Ideally, you should have everything dump into one PST file.  If you need to keep it from getting too big, you should periodically archive it to one or more others (and don't plan on Outlook being responsive while you do so - do it before heading to lunch or something).

Keep Your PSTs Small and Defragged

Outlook is heavily disk IO bound with its data store.  Watch your hard drive light while it's checking email to see this in action.  The larger your PST files get, the longer they take to read and write and the more likely they are to be fragmented all over your physical disk.  Since your disk is most likely the biggest bottleneck in your whole computer, you want to avoid this as much as possible.  You can also go buy a faster drive, but barring that, you should keep your PST files under 1GB if possible and keep individual folders under 5,000 or so items each (YMMV - it depends on what they are).  Your ideal setup is two PST files: one small one that all new mail arrives in (and is filed into), and another that is largely static but is periodically filled via archiving (and frequently backed up).

I still find that POP3 access via Outlook is abysmal.  I haven't found any fix for this yet, but the above tips should help improve your Outlook performance.

Posted by ssmith | 1 Comments

LINQ and the new DevExpress Grid

Mehul has a couple of screencasts up on his blog that demonstrate how to use DevExpress's new LINQ datasource to do optimized paging/updating of the ASPxGridView control.  At just over two minutes, the screencast does a very good job of showing how easy it is to set up LINQ to SQL (not that that hasn't been done before, but the more times you see it, the more likely it will stick).  That setup takes more than half of the running time; after that it's just a few mouse clicks to wire up the grid control to the DataSource control, and you can see the thing in action.

I've seen demos of DevExpress's data transfer technology and it's really sweet because it only sends the bare minimum of data on the wire both to and from the server.  This demo doesn't quite do it justice since Mehul's only working on localhost, but where it really shines is in a real Internet scenario where the clients have some lag time between them and the server.  Avoiding full postbacks (even with UpdatePanels) makes the control far more responsive, especially with large amounts of data, than the stock grid and data controls.

Posted by ssmith | (Comments Off)

TechEd 2008 Birds Of A Feather

If you'll be attending TechEd 2008 (Developers, USA), you may want to come to some Birds of a Feather sessions, which allow you to join a discussion with peers on a topic of interest.  How are the topics chosen?  Well, funny you should ask that - you get to help choose them by voting for the sessions that are of greatest interest to you.  The thing that differentiates BOF sessions from the usual technical presentations is that the organizer is there to moderate the discussion, not to lecture and present.  There are typically no projectors or PowerPoint decks involved in a BOF session, just a bunch of chairs and perhaps (if you're lucky) some whiteboards.  It's a great way to learn more about a topic from more than one person's experience, and to have the opportunity to ask questions (and share answers) of your own.

I'm involved in a couple of sessions that I hope you'll vote for:

Online Advertising for Developers

In this session, we'll discuss various online advertising models and technologies, what we as developers should know about them, and how to optimize our customers' and our own web properties to best take advantage of available sponsorship opportunities. The moderator brings ten years of online advertising experience, and welcomes questions and suggestions from all who attend.

Going Solo (co-moderating with Julie Lerman)

Have you ever thought of going independent? This session aims to bring together independent developers with those who have toyed with the idea to share advice, lessons learned and more.

So, please go vote for your favorites now:

https://www.msteched.com/dev/voting.aspx

Posted by ssmith | (Comments Off)

Two Kinds of Knowledge

Rick posted earlier today about how he's having a tougher and tougher time remembering the exact syntax and details of how to do relatively simple programming tasks, and instead finds that he's going off to find past code he's written (or blogged about) all the time.  Is it the early onset of senility, or is this typical?  It's the same for me, and it reminds me of something I learned in high school about things one knows.  There are two kinds of knowledge (and this predated the Internet so the latter was not nearly as easily accessed): Things You Know and Things You Know Where To Find.

Things You Know

Things you know are stored "by value" in your brain.  I call this intrinsic knowledge.  You actually have the data in there, ready to pull it out at a moment's notice.  This includes a ton of numerical data, like your social security number or phone number, zip code, etc as well as lots and lots of other kinds of data.  Your language skills, the names of everyone you know (whose name you can remember :) ), etc.  This is what most people think of when they think about what they know, and games like Trivial Pursuit are built around this kind of knowledge.

Things You Know Where To Find

The second kind of knowledge consists of things you've stored "by reference" in your brain.  The actual data is extrinsic knowledge, but the learned skill the individual possesses is how to find it. Today, you may not know your best friend's phone number, but you know where to find it (in your cell phone).  Most people who grew up before the 90s learned how to use a phone book (without a built in search!) and encyclopedia (before wikipedia!).  Today of course, you can quickly locate all kinds of things from trivia to weather to movie times with a few mouse clicks, and more and more often, from your phone.  The need to actually store information has greatly diminished when compared to the need to know how to access information (which can be stored far more efficiently).

Effects of this Shift on Developers

More and more, developers rely on the Internet to quickly find answers to their problems.  Sites like ASPAlliance.com and others like them were created by this demand, which of course has its share of pros and cons.  On the plus side, developers can move past problems much faster if they don't have to go trolling through a 3-ring binder with the latest spec from the vendor on the parameter ordering for some API and can instead just grab a sample online.  However, this assumes that the sample they found online is actually correct, which oftentimes is a faulty assumption.  For one thing, the developer may think that if the code works, it must be correct, but unfortunately there's more to good code than just compiling and returning the expected result.  A lot of examples, especially simple ones, have serious problems with error handling, performance, security, or all of the above and more.  A developer who doesn't have the innate knowledge required to see these issues in the code they find may be able to churn out working code at an acceptable pace (for the project and its managers), but the quality of that code will be seriously questionable.

Rick raised the question of how he would do in an interview, if asked to write out from memory the code required to fill a DataSet from an adapter (probably not good, he thought).  In my opinion, this is not a terribly useful interview question.  In my own interviews, I ask candidates to actually write a working program (a small one), using the tools I expect them to use while on the job (Visual Studio and full Internet access).  Seeing what they're able to come up with in this environment is far more telling, and lets me evaluate both their intrinsic knowledge and their ability to find what they don't "just know".  I'm not going to intentionally cripple my developers by cutting them off from the Internet, so interviewing them in that context is a waste of everybody's time.  But I do want to see that they are able to utilize their tools, including the Internet, effectively, and that they are able to properly evaluate and utilize what they find there.

Another useful interview technique is to take some simple code with problems (no exception handling, performance issues, resource leaks, badly in need of refactoring, etc. - all the crap you find in 90% of the online examples someone will grab) and ask them to fix it.  Depending on the level of the candidate, you might just give them that much to go on, or you might ask them to provide some proper error handling or resource cleanup (e.g. the using() statement).
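
As an illustration (not from any real interview - the query and connectionString are placeholders, and this assumes System.Data.SqlClient), the "before" might look like this:

SqlConnection conn = new SqlConnection(connectionString);
conn.Open();
SqlCommand cmd = new SqlCommand("SELECT UserName FROM Users", conn);
SqlDataReader reader = cmd.ExecuteReader();
while (reader.Read())
{
    Console.WriteLine(reader.GetString(0));
}
reader.Close();
conn.Close();
// Problems: no exception handling, and if anything throws, the connection,
// command, and reader are never disposed.

The cleaned-up version wraps everything in using() blocks so the connection, command, and reader get disposed even when an exception is thrown:

using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("SELECT UserName FROM Users", conn))
{
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine(reader.GetString(0));
        }
    }
}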

One good way to avoid the risk of using lousy code when searching the Internet is to use your own code.  My first articles were all written for my own use, and only later became popular with others online.  As a consultant, I wanted to be able to store my own personal notes somewhere I could access them from any client, and so I added them to my column on ASPAlliance.com as far back as 1998.  I got tired of reinventing the wheel with regular expressions, so I created the Regular Expression Library, regexlib.com.  With the ubiquity of blogs today, it's a simple thing for any developer to keep their own collection of code snippets and samples available online for their own use, and I think this is a great way to get the best of both worlds.  Not only that, but if you're concerned about how you'll do in an interview, I think most employers would be impressed by your ability to locate solid examples of how to accomplish typical programming tasks on your own blog or web site.  I know I would be.

Posted by ssmith | 6 Comments

Tweak web.config To Set Compilation Debug False

ASP.NET applications should never run with <compilation debug="true"> in production.  It can have drastic performance implications (of the negative kind).  Obviously, in a perfect world, developers would always remember to verify this setting whenever they upload changes to production, but unfortunately many organizations utilize fallible humans in their deployment process, and this is something that is easily missed.

As part of an automated build process, this problem can be eliminated fairly easily.  Most sections within web.config can be extracted to separate files (using the configSource="{path}" attribute), and separate files can be pulled in for TEST, STAGE, and PRODUCTION environments.  However, the bulk of the <system.web> section will likely need to be the same between all three of these environments, so maintaining separate versions of this configuration element would violate DRY and would be prone to problems.  The solution in this case is to keep these settings in the main web.config file, and tweak them as part of the deployment process within the automated build.  If you're using Web Deployment Projects, they can help in this case.  If you're not, keep reading.
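
For instance (the file names here are just an example), a section can be pulled out of web.config like this:

<connectionStrings configSource="connectionStrings.config" />

and the automated build can then drop the appropriate environment-specific file (connectionStrings.TEST.config, connectionStrings.PROD.config, and so on) into place as connectionStrings.config during deployment.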

The easiest way to accomplish the modification of the web.config file is with an EXE that can be called from MSBuild, NAnt, CCNET, or whatever build automation software you're using.  If you're only using one of these, it might make sense to create a custom MSBuild or NAnt task just for this purpose, but having the EXE is a bit more general purpose as it can then be called from any of these, or even from a batch file.  I decided to name the EXE TweakConfig, and while it includes some code for checking parameters and such, its main function boils down to this (thanks Dan Wahlin for the original version of this code):

        private static void ModifyDebugValue(string path, bool debugState)
        {
            XmlDocument doc = new XmlDocument();
            doc.Load(path);
            XmlElement compile =
                doc.DocumentElement.SelectSingleNode("system.web/compilation") as XmlElement;
            if (compile != null)
            {
                compile.SetAttribute("debug", debugState.ToString().ToLower());
            }
            doc.Save(path);
        }

For example:
c:\>tweakconfig.exe web.config debug=false

would set the debug attribute of the <compilation> element in web.config to false.

We built this into a continuous integration solution for a client last week, and it's working great.  I've been helping a few different companies with their continuous integration server setup (with CruiseControl.NET), and wrote a white paper a couple of months ago for Microsoft on the topic (with TFS 2008), so this is an area I'm spending a fair bit of time on lately.  If you'd like help getting up to speed with automated builds and continuous integration for your company, feel free to contact me.

I've made the source project and the EXE available.  If you find any bugs or enhance it, please email me and I'll update my files.

Posted by ssmith | 3 Comments

Book - Cryptonomicon

One of the things I want to blog about periodically is what I've been reading, and a few of the things I've read lately have actually not been about software development (which is a good thing, if somewhat rare the last few years).  One book I finished last year is Neal Stephenson's Cryptonomicon.



This was a wonderful, very intelligent book.  It did a pretty good job of making me feel like my vocabulary was completely inadequate, since it seemed like every few pages the author was using words I was unfamiliar with, or lengthy metaphors which were at times difficult for me to follow.  Very humbling - if you find this book an easy read, I bow down to your language skills.

In addition to its being somewhat of a challenge to read due to its high language bar, the book does a great job of incorporating some technical content, most prominently cryptography (as you might have guessed), in a manner that is neither dry nor distracting from the plot of the story.  The story itself is told across two separate generations and takes place in what might be the near future in one thread and during World War II in another.  The weaving together of both threads, and their resolution as the story reaches its conclusion, is very entertaining.

I definitely recommend this book to anyone who enjoys a techie thriller, WWII history, or science fiction.  Most geeks will like it, I think, and I enjoyed it enough that I've picked up a few other Stephenson titles (but haven't yet had a chance to read them).

Posted by ssmith | (Comments Off)

SQL 2005 Tools Install Experience is the suck

Just finished building a couple of ultimate developer rig machines for the office for Brendan and me, and was adding software today.  So I installed Office, Visual Studio 2008, and then SQL Server 2005.  I'd forgotten that installing SQL 2005 client tools seems to require sacrificing a chicken under the right lunar conditions in order to get it right!  I've blogged about this same issue before, but apparently it gets [sarcasm]better[/sarcasm] with x64.

I did my due diligence and searched for the answer after the previous steps Brendan outlined failed with Vista 64 and SQL 2005 x64.  I found a blog entry that sounded promising, that involved running setup.exe with the SKUUPGRADE=1 parameter.  This failed.

But I did find the answer: the trick is to browse to the Tools folder and run SqlRun_Tools.exe directly.

This WORKS!  Here's the full path, on my CD (MSDN):

{drive}\ENGLISH\SQL2005\DEVELOPER\SQL Server x64\Tools\Setup\SqlRun_Tools.exe

Whew.  Glad to get that working.  But let's revisit the process of installing, and compare the Visual Studio install experience with the SQL Client Tools install experience.  We'll start with Visual Studio.

Visual Studio 2008 Install Experience

1) Put in DVD
2) Click the Install Visual Studio link
3) Click Next a couple of times.  Verify your license key.  Pick what to install.  Next.
4) Everything it needs gets installed in N minutes without restarts or user intervention.
5) Stick a fork in it - it's done.

SQL Server 2005 Install Hell Experience

1) Put in MSDN DVD
2) Tell browser it's ok to show active content in it so the menu comes up.
3) Scratch head about which version of Developer you want to install - pick one (SQL Server 2005 Developer Edition - 64-bit Extended (English)).
4) Opens up Windows Explorer in the root folder [d:\ENGLISH\SQL2005\DEVELOPER] with no further instructions.  There are no executables in this folder, no MSI files, and three subfolders (SQL Server Itanium, SQL Server x64, SQL Server x86).  WTF?  Didn't I just tell you I wanted 64-bit non-Itanium?
5) *Guess* that SQL Server x64 folder is where you'll actually find the installer.
6) Nope.  Found folders for Servers and Tools.  What was I trying to install again?  Oh hell, let's try Servers.
7) *Guess* that Setup.exe is what we want here.  Run it.  This part actually works.  Mostly.  Except it won't install my client tools.  And it says IIS isn't installed.  So I install IIS, but setup still doesn't see it.  I let it finish.  I reboot.  I try it again.  It still can't see it.  I say screw it - I still need Management Studio, since the client tools part is what failed.
8) Go back to step 6 and pick Tools.
9) Run Setup.exe.  It fails saying the tools are already installed (SQL Express comes with Visual Studio, remember).  Say Next.  Modal dialog telling me I fail.  Look for back button so I can tell it to just go ahead and overwrite the install.  There isn't one.  Click Cancel.  It reminds me how dumb I am to try and do this.
10) Go Google for while about this issue.
11) Figure out that if you just go to the SETUP folder under TOOLS and then click SqlRun_Tools.exe, it will actually install the tools.  Wonder to oneself why the SETUP.exe file in the TOOLS folder doesn't just call this to begin with and save me the trouble.
12) Poke something sharp in eye to distract from pain of SQL Server install process

Seriously, I love the install experience for Vista.  I love the install experience for Office.  I love the install experience for Visual Studio.  Please send the SQL installer team off to remedial installer training with any of these teams!

Posted by ssmith | 38 Comments

Gotta Love Orcsweb

Does your host do this?  Today I got an IM from one of my support folks at ORCS Web, asking me about one of my dedicated servers on which I have installed SQL Express for a few small web apps.  She had noticed that I'd forgotten to set up any kind of backups for these, and wanted to know if I wanted any.  When I admitted that I'd forgotten to set them up, she just took care of it for me.  How cool is that?  They're also good about letting me know when I have stuff I'm paying for but no longer using - that's called awesome customer service and that's how you can turn customers into fans and evangelists.

Posted by ssmith | 5 Comments

MVC Source Code Available

The ASP.NET MVC library's source code has been made available on CodePlex, ScottGu announced today.  The project itself is still very much a work in progress, but with this you should be able to get a much better idea of how everything is laid out and working.  I think it's great that Microsoft is being this transparent with their development efforts, and the amount of community feedback they are accepting and acting on with this project is outstanding.  For a variety of legal reasons they are not accepting patches to the code directly, but that doesn't mean reproduced bugs, and even blog entries noting changes you'd like to see, won't be used to improve the product.  And of course, you can fix the bug in your local copy and avoid having to wait for a fix from Microsoft.

Another thing that I like about this is that it shows (one way) how to build an MVC platform.  There are of course other MVC implementations for ASP.NET already, but the one that most developers going forward will use (say, a year from now) will be the one coming out of Redmond, so I think this one is important.  Anyway, the reason I care about this is that in addition to learning about MVC I'm also trying my best to become a Silverlight expert, and one thing that is clear about the default mode of building Silverlight applications is that it is an awful lot like web forms.  I'm hoping someone (I doubt I'll have time myself) takes the MVC source and ports the important parts of it over to a Silverlight implementation, for use in building full-blown applications in Silverlight.  Anybody want to build Silverlight MVC?

Posted by ssmith | 1 Comments