HttpRequest vs HttpRequestBase


I’m doing some conversion of an ASP.NET Webforms application over to MVC.  Some of the code depends heavily on the older model and the plumbing underneath – in this instance, HttpRequest.  MVC replaces this outright with its own class, HttpRequestBase.  The two types expose essentially the same members, so you can usually swap one for the other in code.  But I now have code that might be called from either Webforms or MVC.

Initially, I wrote a little wrapper for this, but as it turns out, there’s a class called System.Web.HttpRequestWrapper that inherits from HttpRequestBase.  The constructor for this class accepts the HttpRequest object as an argument, and exposes all its properties in a new facade.

You can substitute HttpRequestBase wherever you’re currently specifying HttpRequest.  Then wrap any HttpRequest instances in a new HttpRequestWrapper, and you’re good to go.  Example:

string SomeFunction(HttpRequest request)
{
    //do something here
    return "Hello!";
}

Change it to:

string SomeFunction(HttpRequestBase request)
{
    //do something here
    return "Hello!";
}

And then, change the caller:

var x = SomeFunction(Page.Request);

Change it to this:

var x = SomeFunction(new HttpRequestWrapper(Page.Request));
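
On the MVC side, a controller’s Request property is already an HttpRequestBase, so it can be passed straight through with no wrapping at all (a quick sketch, assuming the call happens inside a controller action):

// Inside an MVC controller action - Request is already an HttpRequestBase
var x = SomeFunction(Request);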

Alternatively, here’s the little wrapper I mentioned writing.  It does the work of forwarding calls to whichever of the two types you hand it.  Enjoy.

    using System;
    using System.Collections.Specialized;
    using System.Web;

    public class WrappedHttpRequest
    {
        private readonly HttpRequest _r;
        private readonly HttpRequestBase _rb;
        private readonly UnderlyingTypeEnum _underlyingType;

        public WrappedHttpRequest(HttpRequest request)
        {
            _underlyingType = UnderlyingTypeEnum.HttpRequest;
            _r = request;
        }

        public WrappedHttpRequest(HttpRequestBase request)
        {
            _underlyingType = UnderlyingTypeEnum.HttpRequestBase;
            _rb = request;
        }

        public int ContentLength
        {
            get { return _underlyingType == UnderlyingTypeEnum.HttpRequest ? _r.ContentLength : _rb.ContentLength; }
        }

        public Uri Url
        {
            get { return _underlyingType == UnderlyingTypeEnum.HttpRequest ? _r.Url : _rb.Url; }
        }

        public string HttpMethod
        {
            get { return _underlyingType == UnderlyingTypeEnum.HttpRequest ? _r.HttpMethod : _rb.HttpMethod; }
        }

        public NameValueCollection QueryString
        {
            get { return _underlyingType == UnderlyingTypeEnum.HttpRequest ? _r.QueryString : _rb.QueryString; }
        }

        public NameValueCollection Form
        {
            get { return _underlyingType == UnderlyingTypeEnum.HttpRequest ? _r.Form : _rb.Form; }
        }

        public NameValueCollection Headers
        {
            get { return _underlyingType == UnderlyingTypeEnum.HttpRequest ? _r.Headers : _rb.Headers; }
        }

        public byte[] BinaryRead(int count)
        {
            if (_underlyingType == UnderlyingTypeEnum.HttpRequest)
            {
                return _r.BinaryRead(count);
            }
            else
            {
                return _rb.BinaryRead(count);
            }
        }

        // Implicit conversions let callers pass either type without constructing the wrapper by hand.
        public static implicit operator WrappedHttpRequest(HttpRequest d)
        {
            return new WrappedHttpRequest(d);
        }

        public static implicit operator WrappedHttpRequest(HttpRequestBase d)
        {
            return new WrappedHttpRequest(d);
        }

        #region Nested type: UnderlyingTypeEnum

        private enum UnderlyingTypeEnum
        {
            HttpRequest,
            HttpRequestBase
        }

        #endregion
    }
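
Here’s a quick usage sketch (SomeFunction is just the illustrative method from above, reworked to take the wrapper):

    // Works whether the caller lives in Webforms or MVC, thanks to the implicit operators.
    string SomeFunction(WrappedHttpRequest request)
    {
        return request.HttpMethod + " " + request.Url;
    }

    // Webforms caller: HttpRequest converts implicitly.
    var fromWebforms = SomeFunction(Page.Request);

    // MVC caller: HttpRequestBase converts implicitly.
    var fromMvc = SomeFunction(Request);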


Posted in .NET Development, Uncategorized

MVC vs WebForms, and A Look Back


Somehow I’ve managed to avoid it, but I’m now involved in a large project using Microsoft development technologies on the web, and the primary interface is MVC running on top of IIS. Ah, yes. It’s the much-talked-about MVC framework.

A majority of the recent requisitions I’ve seen out there (at least in Southern California) for developers like me list it whenever there’s anything to do with web development.  There’s also Silverlight here and there, and sometimes SharePoint (ew!).  But I see MVC quite a bit right now, and I’ve asked myself, "what’s all the hype about?"

The MVC design pattern is nothing new; it’s been around since the 1970s.  I actually had a brush with it a few years back, when I was using SAP NetWeaver Developer Studio to do a little Java-based development for SAP’s Web Dynpro.  SAP’s web interface, as I recall, was decidedly plain-vanilla, but for 85% of business applications it probably would get the job done.  There was that lousy part where you couldn’t use it with any browser except Internet Explorer, but I’ll save that for another day.

Now, truth be told: I came out of several years as a "classic" Visual Basic developer, and transitioned from there to "classic" ASP (Active Server Pages).  The leap from "classic" ASP to ASP.NET Webforms was a tremendous gain in stability and maintainability.  The Page model that Webforms introduced, and then built upon for the next eight years, offered a quick way to get those business applications away from WinForms and onto the web.  Doing so didn’t require a rocket scientist, a Visual C++ programmer, or even an HTML expert.  It worked.

Somewhere in the midst of all this, I think the Ajax monster sneaked out from under the bed and bit us.  Wow, partial page updates?  I think Ajax changed web development from delivering a simple interface into delivering a web experience.  I don’t want to sound clichéd, but realistically: where would the Facebooks and Twitters of the world be without Ajax?  Eventually Microsoft’s response was the Ajax Control Toolkit, but I just don’t think it could ever keep pace with all the Ajax happenings.

The PHP and Ruby guys, with the help of jQuery, were running circles around .NET developers – at least in terms of being able to customize the user interface.  On the back end, the .NET developers still had rock-solid C#, IIS, and SQL Server, but something was still amiss.  ASP.NET was becoming less and less "the" tool to be using.  There wasn’t anything wrong with C# per se, but the implementation was off-kilter just a teeny bit.  On the other hand, those same PHP and Ruby developers seemed to have to write an awful lot of code for that UI stuff, which was something the ASP.NET guys didn’t necessarily have to deal with too often.

This is just my view.  You can agree, disagree, laud it, or shove it.  No hard feelings!

I read an older article today that stated that MVC puts the web back in web development for Microsoft platform developers.  It’s no longer enough to rely on built-in controls to produce the simplest HTML that browsers will support, because end users are expecting more.  The corporate marketing departments are demanding more.  More, more, more.  HTML5 is coming in, and Flash and Silverlight are going out (I’ll have to write more on this).  What’s next?  3D?

Is MVC the right thing to carry us to the next level?  I don’t think the answer is definitive.  I also think there’s a tipping point: for some jobs WebForms is still good enough – "paint it" with the IDE, code to it, test it, deploy it, and it works – but past a certain level of need, MVC could serve that need better.  So for the near future, discovering that tipping point is going to be really interesting for me.


Posted in .NET Development, Software Development

Low-Density vs High Density RAM


I’m doing a little project at home and found the need to add RAM to an older (2007?) PC that I own.  I’d actually purchased some about two years back but I bungled the purchase – I bought “ECC” memory, which is for servers only, and it wouldn’t work.  I got busy and never returned it (and probably could have!)

So this time it’s non-ECC, non-registered, unbuffered memory.  The last question was whether to buy "low density" or "high density".

What’s the difference?

So I need to buy DDR2 memory modules.  Each one is basically a flat little component board, about 4.5 inches by 1 inch in size.  The way they’re designed, they can hold eight little chips on each side of the board.

A module of 1 GB or larger (I’m not sure about smaller sizes), when "low density", has 16 of these chips on it: eight on the front and eight on the back.

But some unscrupulous manufacturers figured out a way to "beat" the system – and call the modules they sell "high density".  They use four chips on each side, and those chips have double the capacity of the chips on the low-density counterpart.  Fewer chips and less soldering mean the manufacturer can save money on each unit.


  • A low-density 1 GB module has sixteen 64 MB chips,
  • and a high-density 1 GB module has eight 128 MB chips.

High-density modules are off-spec and won’t work at all in many situations (many PC motherboards reject them).  No store will take them back once used, except perhaps for an exchange.  Wow, what a hustle!

Caveat emptor!

Posted in Technology-Miscellaneous

NAT Loopback (Hairpinning) on D-Link Routers


Auuugh! I finally got it.

I found the need to use NAT loopback – a.k.a. hairpinning – on my home router.  I have an application running that I need to access both from home and from elsewhere, without having to switch between the WAN IP address and the LAN IP address.

Most home routers don’t allow this at all.  And actually, I’d been searching up and down to maybe purchase another device (like I really need more junk around here, right?) that could do it.  Maybe one of those fancy routers that supports OpenWRT.  A business-class something.  And lo and behold, those guys at Fry’s or Micro Center almost had my hard-earned cash, but curses, they’re foiled again.

Now, I’d seen this in the D-Link router settings (I have a DIR-655) but never really knew the difference.  To really use this device, I guess you have to have written instructions in somethingese, and I was lacking.

So this router has settings for “virtual server” and also “port forwarding”.  They look the same, but they aren’t.

Anything set up on “virtual server” will use NAT loopback.  Anything set up on “port forwarding” will not.  Wow, that was simple.

Thanks to Dan Larsen, who seems to have beat me to the punch by just a few months.


Posted in Technology-Miscellaneous

WCF and Simultaneous Requests


I just finished a project where our team provided a new WCF service and an interface for an existing ASP.NET application maintained by another group.  The old application previously made direct connections to a back-end store, and our job was to supplant that older store because of extensive requirements changes.  The WCF service was developed on top of .NET Framework 4.0.

Part of this equation was the fact that the ASP.NET application would be used by several users at the same time, so we had to test for that.  Also, some of the WCF calls would take a long time to return results, but we didn’t want to leave end users waiting indefinitely for their browsers to come back with anything.  The two teams agreed that the WCF calls would time out after two minutes, which we thought was reasonable.

During our testing, we saw some strangeness.  The WCF service ran behind a load balancer spanning three nodes.  We saw situations where users on different browsers would make simultaneous requests, but our WCF service wasn’t actually receiving those requests until several minutes after they were made.  Oddly enough, we had set attributes on the WCF service class to create a new service instance for each request coming across the wire, so it was a bit confusing.
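
(For reference, the per-call setup looked roughly like the sketch below – the contract and class names here are purely illustrative, not our actual service.)

    using System.ServiceModel;

    // Sketch only: hypothetical contract and class names.
    [ServiceContract]
    public interface ILegacyStoreService
    {
        [OperationContract]
        string LongRunningSearch(string criteria);
    }

    // With InstanceContextMode.PerCall, WCF creates a fresh service instance for every incoming request.
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
    public class LegacyStoreService : ILegacyStoreService
    {
        public string LongRunningSearch(string criteria)
        {
            // ... call into the new back-end store ...
            return "results";
        }
    }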

There were two culprits — no, I’ll say three.

  • During a longer-running WCF request, the ASP.NET client application was spawning lots of other requests asynchronously for each user.  Some of the requests were calling the same method, with the same arguments.  The developers didn’t write it such that the system would make sure request “A” was complete before making request “B”.  This made for a LOT of requests being pushed across at the same time.
  • Our WCF service used WSHttpBinding (essentially, an HTTP(S) connection).  While the ASP.NET client generated request after request against our WCF service, in truth only two requests were leaving its IIS host at any given time.  The HTTP/1.1 specification (RFC 2616) recommends that a client open no more than two simultaneous connections to a single server, and the .NET Framework’s plumbing adheres to that limit by default.  So under the hood, requests actually get queued no matter whether the client application is ASP.NET, WinForms, WPF, or a console application.
  • The WCF application ran in IIS.  The default setting for IIS app pools running against .NET 4.0 is that the pool can create a maximum of 16 processes.  After that limit has been reached, the app pool queues incoming requests and doesn’t even send them to your application (in this case, our WCF service) until other requests have finished.

And, some solutions:

  • By jingo, we got on the other team’s case about them writing an application that would flood our WCF service with requests.  So they scaled that back.
  • We didn’t tweak the IIS app pool settings to allow more processes to be created if needed.  This could be a future consideration, but we also considered that the WCF service was running on three nodes, and those nodes weren’t used exclusively for our service.  A maximum of 48 simultaneous requests (three nodes × 16 processes each) should be more than enough.
  • We had the ASP.NET developers add some settings to their web.config file:
    <system.net>
      <connectionManagement>
        <!-- the maxconnection value shown here is illustrative; the original snippet was truncated -->
        <add address="*" maxconnection="48" />
      </connectionManagement>
    </system.net>

Essentially, this little snippet allowed their code to make more than two HTTP connections to our WCF service at the same time, so their application wouldn’t queue up requests from users.  Gold!
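
If editing web.config isn’t an option, the same limit can also be raised in code once at startup – a sketch, assuming it runs in Global.asax, with an illustrative value:

    using System;
    using System.Net;

    // In Global.asax: raises the default two-connections-per-host limit for outgoing HTTP requests.
    void Application_Start(object sender, EventArgs e)
    {
        // The value here is illustrative, not the one actually used.
        ServicePointManager.DefaultConnectionLimit = 48;
    }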

And to my reader, I hope this has been informative for you and maybe saves you from some grief!

Posted in .NET Development

Webmin and Virtualmin


I guess I’ll share one of my Linux adventures with you.  I remember messing around with a Linux (Slackware) installation for a little bit about 20 zillion years ago, but never really got into it.  I don’t regret it, because I’ve had some zany adventures with… hmm… maybe ten versions of Windows since then (don’t quote me on that number).  But lately, maybe in the past two years, I’ve been itchin’ for an OS that just does more.  The folks up in Redmond seem to be developing according to the preferences of eighth-graders.

Part of the ordeal with Linux and any serious effort is administering the endless bells and whistles it offers.  I do a lot of work with web sites, and that’s actually another layer of complexity on top of the base administrative tasks.  Now, each Linux flavor works just slightly differently from the others, and that doesn’t necessarily help.  When I’m in a jam, sometimes I just want to get in, make the changes I need, and get out.  Often I need to do this remotely, on top of that.

In come Webmin and Virtualmin.  Both are open-source.

Webmin is a package that weighs in at a couple of megabytes or so and takes about a minute to install completely.  When it’s done, you have a completely web-based administration panel for your Linux workstation or server.  I think it has controls for all of your most basic services – Samba, rsync, MySQL, Apache, the NetBIOS daemon, CUPS printing, iptables, user accounts, cron, and the like.  It’s like Control Panel in Windows.

I installed Webmin and simply pointed my browser to http://[myservername]:10000.  Everything else is easy as pie.  I’m sure there are some obscure configuration things it doesn’t cover, so it’s not necessarily the holy grail.  But for the average to above-average person in need of an easy-to-use tool, this one is it.  It runs on Debian, Ubuntu, Fedora, CentOS, SuSE, and Mandrake Linux.

Virtualmin is actually an add-on module for Webmin.  It’s used for quickly configuring web sites (and related e-mail, databases, file permissions, and so forth) on Linux.  Specifically, it works with the Apache web server and the BIND domain name server.  As both of these can be a real pain to configure by hand, it’s a godsend.  There’s also a Virtualmin Pro version, which I think is the same as the basic Virtualmin, just with more features.  For me, the basic one is good enough for now.

Both of these packages install pretty easily.  The web site has instructions for the newbie.

Well, what are you waiting for?  Get to it! ;P







Posted in Adventures with Linux

Acidizer

Acidizer is a cool little tool I wrote back in… hmm… 1998?  I put it out as shareware in 2000, and made a decent little chunk of money having done so.  I’m re-re-releasing it for free.  It’s run its course, ya know?

What is it?

Acidizer is an add-on tool for users of the Sonic Foundry ACID suite. It is used to add extra information to wave audio files so that ACID will automatically recognize the file as a one-shot, a loop, or a disk-based audio file. It also aids ACID in knowing how to accurately pitch-shift or time-stretch a wave file.

I remember getting a letter from the Legal department at Sonic Foundry after I developed this and put it out there.  I could have pooped my pants, but basically they were asking that I take care not to soil their ACID trademark in any way.  As a matter of fact, they also shared with me that their engineers were using my tool internally.  Imagine that!

I lost the source code years ago… :P

Sonic Foundry ACID is no more – I think Sony bought it from them outright.  But I’m sure you ACID folks are still out there.  Have fun.


Posted in Technology-Miscellaneous

WAN Routing on D-Link DIR-655 Router


Okay, so once upon a time I bought this so-so wireless router, a D-Link DIR-655.

I thought it was a step up from my Netgear RangeMax Wireless-N WNR3500 (wow, that’s so nerdy sounding), as I’d moved into a larger space and needed better wireless range than the Netgear offered.  I was familiar with routers that supported OpenWRT for expandability, but frankly, that day down at the local Fry’s Electronics, I really wasn’t ready to shell out an extra $100 for a router in that class.  So I ended up with the DIR-655.

I tend to be a bit of a road warrior, and I use OpenVPN on my home network so that I can hook up to various resources at home while I’m traveling.  OpenVPN is a great tool.  Out of the box, however, it only lets the client reach the OpenVPN server machine itself; you have to add static routes to your internal router before the client can reach other devices inside your network.

I’d done this with the Netgear router.  But I got the DIR-655 home, and it seemed to only allow static routes to the WAN (addresses outside the router).  Then I ran into this article describing how the functionality was there, but some wise guy at D-Link had basically hidden it from the HTML interface.  Well, I don’t need to change my routing table too often, but I didn’t feel like futzing around the way the writer of that article does, so here’s a tool that maybe you can use too.


I have hardware version B and firmware version 2.00 NA.  I haven’t tested this with any router other than my own.  You would be using this tool at your own risk.  If you’re successful, it’d be nice to get a little thank-you.  If you’re not… since this actually just uses the web interface of the router, I don’t see much harm coming your way.

Best of luck.  Oh, you’ll need to have the Microsoft .NET Framework 2.0 installed.  This is a Windows Forms application, and it should probably work with Mono if you’re running on Linux.

Download here:


Posted in Technology-Miscellaneous

Wowee, I’ve Been Doing This Forever


I just thought I’d share some interesting stuff about how this all came together for me, for what it’s worth.

Long, long ago, in a not-so-prominent neighborhood not so far away from where I live now, I was in a program for the area’s gifted kids.  Computers weren’t the happening thing in most places just yet, but I think it was a real blessing that by the time I got to fifth grade, our class had two TRS-80 Model III computers at our disposal.  That year, I learned BASIC.  Actually, my friend from that time would probably tell you that I was pretty zealous about my interest that year.

Maybe by the end of the same year, I had convinced my mom and grandma to invest some coin in a Timex/Sinclair 1000 for me.  It displayed in two colors – which didn’t matter much, since the television it hooked up to was black-and-white anyway.  Yes, the total price for the computer and the impressive 16-kilobyte memory expansion rang up to a whopping, neck-snapping forty-two dollars and fifty-nine cents with tax.  We bought it from the neighborhood Thrifty drug store, as a matter of fact.  I think the store had it in a glass display case along with the scientific calculators, as it was certainly small enough.  So I learned more BASIC, although a different flavor than the TRS-80’s.

I actually took a class in seventh grade.  Apple IIe and Atari 800 computers.  Mehhhhh, that was a waste of a semester.  I had a cool teacher but I remember that his approach to development was something from a prior era of computing.  He expected us to get an assignment from him, sit at our desks and write out a program, and then approach him for permission to go try our luck on the classroom full of computers.  It’s rather peculiar now that I think of it – the computer lab was actually a room that had been used for some kind of shop class (remember those, America?).  But not to get off the point.  This guy was from the punch-card mindset.  We were doing something totally, totally different.

So my last entry, before figuring out how to make a buck in the game, was my beloved Commodore 64.  I can’t really remember what the deciding factor was in jumping ship from my TS/1000, except that I know I’d grown weary of the fake keyboard and the fact that there seemed to be no software offerings for the thing whatsoever.  I think I spent an afternoon messing with one at the Mid-town Sears store, and eventually persuaded my mom (8th grade) to get me one.  It was about $300 with a 5.25″ disk drive and we bought it from the Toys-R-Us in Carson.  Of course, I didn’t want to do much more with BASIC; and somehow I figured out that visiting the Federated electronics store in Huntington Park was a good way to meet people who were also interested in what I was doing.  Eventually I got my hands on a 300 baud modem, got involved with the bulletin board system (BBS) scene around Los Angeles, and learned to code in 6502 assembly code, which ran on the C64.  The BBS years were the best.  I met a lot of great people, some of whom I’m still friends with to this day.  I also met a super-bright guy who worked at the Federated store in the computer department, and one day I asked him how the heck he could know as much as he did and only be twenty-one years old.  This guy, Dana Short, told me he’d learned a ton of stuff at a math-science magnet high school called Narbonne – and I ended up there for three GREAT years.  (Thanks Dana!  You’re still the man!!!)

That’s enough for tonight, folks.  Until next time.

Posted in Software Development

Xen, My New Almost-Best-Friend


I’ve been having a ball for the last couple of weeks, in a sense.

For our family business, I set up a virtual private server about two years back over at GoDaddy.  The purpose was to host a handful of sites that we were maintaining, and I’ve been building on another unreleased project on and off for some time now.  It worked well for a while.  I have the Windows package that runs about $40.00 a month.  I also added a service called Plesk that makes managing all the stuff for the web sites a little more palatable.  (I could do the stuff by hand, but who has the time?)

For the last few months, however, the Plesk piece of the puzzle has been painful.  I’ve received email after email telling me that the Plesk license has expired (something GoDaddy is supposed to be responsible for renewing).  I get in touch with the GoDaddy people, they fix the problem, and it’s right back to square one in a week or two.  Frankly, I haven’t had the time (or interest) to measure how long it goes between outages, but it is frustrating.

Hosting providers are a dime a dozen.  GoDaddy just happens to be the 900-lb gorilla in the industry.  I think it’s really a situation where you run with their offerings because it’s convenient, until you find something equivalent and reliable out there.  So my dilemma – where to go?  Linux VPS hosting is everywhere, but Windows hosting is a bit more specialized.

Luckily, I’ve been playing around with a server virtualization platform that runs on Linux, and it might do the trick.  It’s called the Xen hypervisor, and for me it’s frankly the best thing since sliced bread.  Now, what makes this one so special?

There are several flavors of similar software out there.  Microsoft’s got its Hyper-V platform (and, before that, Virtual PC, which it acquired from Connectix).  There’s VMware.  These are the two main contenders.  Oracle has its VirtualBox.  All of these packages run on top of some other OS.  Okay, so if you’re a little tech-savvy, you’re asking by now, "well, doesn’t it have to?"  You’d be half right.

A virtual server really only needs to know how to hook whatever it’s running virtually to the host machine.  That’s all.  Anything else running in the host layer is overhead and will consume memory, CPU, and I/O unnecessarily.  This has been my one pet peeve about virtualization for quite a while.

With Xen, the virtual machines run on a tiny layer instead of on top of a whole operating system.  The net result is that Xen outperforms the other guys by almost double.

Xen does require a minimal Linux operating system to run as one of its virtual machines (termed "domains").  All this domain (known as dom0) does is communicate with the Xen layer beneath it, giving the user an interface to create, start, stop, reboot, and delete other domains.  I’ve actually got a box where the Xen dom0, an Ubuntu 12.04 server instance, runs in 371 MB of RAM.  Windows, eat your heart out!

I can run Windows and Linux instances side-by-side as Xen domains, and they’ll never "know" that they’re doing so.  The offerings for pre-built virtual appliances are tempting – and I can convert many of the ones out there to run on Xen with just a little effort.  Xen’s nothing new, but this is just the beginning for me.

And soon, I can kick GoDaddy to the curb.  Yayyyyy me!



Posted in Server Virtualization