After a while of cross-posting, I decided to retire this blog in favor of my blog on Los Techies. Hope you’ll join the conversation there!
One of the new features of ASP.NET MVC 3 is a controller-level attribute to control the availability of session state. In the RC the attribute, which lives in the System.Web.SessionState namespace, is [ControllerSessionState]; for RTM ScottGu says it will be renamed simply [SessionState]. The attribute accepts a SessionStateBehavior argument, one of Default, Disabled, ReadOnly or Required. A question that came up during a Twitter discussion a few weeks back is how the different behaviors affect Html.RenderAction(), so I decided to find out.
I started with an empty MVC 3 project and the Razor view engine. We’ll let a view model figure out what’s going on with our controller’s Session:
public class SessionModel
{
    public SessionModel(Controller controller, bool delaySession = false)
    {
        SessionID = delaySession ? "delayed" : GetSessionId(controller.Session);
        Controller = controller.GetType().Name;
    }

    public string SessionID { get; private set; }
    public string Controller { get; private set; }

    private static string GetSessionId(HttpSessionStateBase session)
    {
        try
        {
            return session == null ? "null" : session.SessionID;
        }
        catch (Exception ex)
        {
            return "Error: " + ex.Message;
        }
    }
}
The model is rendered by two shared views. Index.cshtml gives us some simple navigation and renders actions from our various test controllers:
@model SessionStateTest.Models.SessionModel
@{
    View.Title = Model.Controller;
    Layout = "~/Views/Shared/_Layout.cshtml";
}
<h2>Host: @Model.Controller (@Model.SessionID)</h2>
<ul>
    <li>@Html.ActionLink("No Attribute", "Index", "Home")</li>
    <li>@Html.ActionLink("Exception", "Index", "Exception")</li>
    <li>@Html.ActionLink("Default", "Index", "DefaultSession")</li>
    <li>@Html.ActionLink("Disabled", "Index", "DisabledSession")</li>
    <li>@Html.ActionLink("ReadOnly", "Index", "ReadOnlySession")</li>
    <li>@Html.ActionLink("Required", "Index", "RequiredSession")</li>
</ul>
@{
    Html.RenderAction("Partial", "Home");
    Html.RenderAction("Partial", "Exception");
    Html.RenderAction("Partial", "DefaultSession");
    Html.RenderAction("Partial", "DisabledSession");
    Html.RenderAction("Partial", "ReadOnlySession");
    Html.RenderAction("Partial", "RequiredSession");
}
Partial.cshtml just dumps the model:
@model SessionStateTest.Models.SessionModel
<div>Partial: @Model.Controller (@Model.SessionID)</div>
Finally, we need a few test controllers, which will all inherit from a simple HomeController:
public class HomeController : Controller
{
    public virtual ActionResult Index()
    {
        return View(new SessionModel(this));
    }

    public ActionResult Partial()
    {
        return View(new SessionModel(this));
    }
}

[ControllerSessionState(SessionStateBehavior.Default)]
public class DefaultSessionController : HomeController { }

[ControllerSessionState(SessionStateBehavior.Disabled)]
public class DisabledSessionController : HomeController { }

[ControllerSessionState(SessionStateBehavior.ReadOnly)]
public class ReadOnlySessionController : HomeController { }

[ControllerSessionState(SessionStateBehavior.Required)]
public class RequiredSessionController : HomeController { }
And finally, a controller that uses the SessionModel constructor’s optional delaySession parameter. This parameter allows us to test RenderAction’s Session behavior if the host controller doesn’t use Session:
public class ExceptionController : HomeController
{
    public override ActionResult Index()
    {
        return View(new SessionModel(this, true));
    }
}
So what do we find? Well, the short answer is that the host controller’s SessionStateBehavior takes precedence. In the case of Home, Default, ReadOnly, and Required, we have access to Session information in all rendered actions.
If the host controller is marked with SessionStateBehavior.Disabled, all the rendered actions see Session as null.
I see this as the key finding to remember: an action that depends on Session, even if its controller is marked with SessionStateBehavior.Required, will be in for a nasty NullReferenceException surprise if it’s rendered by a controller without session state. It would be nice if the framework either gave some sort of warning about this, or used a Null Object pattern instead of just letting Session return null.
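To illustrate the Null Object idea, here is a minimal sketch of what such a wrapper might look like; NullSession and OrNull() are hypothetical names, not part of ASP.NET MVC.

```csharp
// Sketch of the Null Object pattern the framework could have used instead of
// returning null. NullSession answers every read with null/defaults, so a
// partial action never hits a NullReferenceException when session state is off.
using System.Web;

public class NullSession : HttpSessionStateBase
{
    public override object this[string name]
    {
        get { return null; } // reads always miss
        set { }              // writes are silently dropped
    }

    public override string SessionID
    {
        get { return string.Empty; }
    }
}

public static class SessionExtensions
{
    // Callers write controller.Session.OrNull()["key"] instead of null-guarding.
    public static HttpSessionStateBase OrNull(this HttpSessionStateBase session)
    {
        return session ?? new NullSession();
    }
}
```

An action rendered from a sessionless host would then degrade to cache misses instead of crashing.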
Finally, things get really weird if a Session-dependent action is rendered from a host controller that doesn’t reference Session, even if SessionState is enabled.
It’s pretty clear the issue has something to do with where RenderAction() happens in the request lifecycle, but it’s unclear how to resolve it short of accessing Session in the host controller.
So there we have it: a comprehensive test of sessionless controllers and RenderAction for the ASP.NET MVC 3 Release Candidate. Hopefully the inconsistencies of the latter two cases will be resolved, or at least documented, before RTM.
One of my favorite developer events of 2009 was St. Louis Day of .NET. Not only were the facilities (Ameristar Casino) top-notch, but there were a ton of great sessions and I got to pick the brains of some really sharp people. This year’s event looks to be even better, with a huge variety of sessions on principles, practices and plenty of programming. I will be presenting two sessions:
Dynamic .NET has gone mainstream with the recent promotion of the Dynamic Language Runtime into .NET 4. This session will discuss what the DLR is, how it works with C# 4 and Visual Basic 10, and why this doesn’t mean C# has jumped the shark. We will also look at some ways in which these features can be used to solve real-world problems.
System.Interactive is a library distributed with Microsoft’s Reactive Extensions, currently available on DevLabs, which provides a number of useful extensions to the LINQ Standard Query Operators. These extensions include operators to add and contain side effects, handle exceptions, generate and combine sequences, and much more. This session will review the new operators and discuss interesting problems they can be used to solve. Note that Rx is available for .NET 3.5 SP1, Silverlight 3 and .NET 4.0, so this session is not just for those developing on the bleeding edge.
The organizers were kind enough to provide speakers with some discount codes, so I figured this is as good a place as any to give those out. Two lucky commenters will get a code worth $75 off the cover price, with the grand prize being free admission. All you have to do is leave a comment (with a valid e-mail address) convincing me that you deserve these rich rewards over my other suitors. And if your reasons are all terrible, I’ll ask random.org. Deadline is 23:59 CDT on Monday, July 26th.
Hope to see you there!
A common struggle with unit testing is figuring out when to just assume somebody else’s code works. One such example is serializability: for simple classes, it should “just work,” so we shouldn’t need to write a unit test for each of them. However, I still wanted to be able to verify that all classes in certain namespaces were marked as [Serializable], so I wrote the following test:
[TestCase(typeof(Money), "Solutionizing.Domain")]
[TestCase(typeof(App), "Solutionizing.Web.Models")]
public void Types_should_be_Serializable(Type sampleType, string @namespace)
{
    var assembly = sampleType.Assembly;

    var unserializableTypes = (
        from t in assembly.GetTypes()
        where t.Namespace != null && t.Namespace.StartsWith(@namespace, StringComparison.Ordinal)
        where !t.IsSerializable && ShouldBeSerializable(t)
        select t
    ).ToArray();

    unserializableTypes.ShouldBeEmpty();
}
After we have a reference to the Assembly under test, we use a LINQ to Objects query against its types. If a type matches our namespace filter, we make sure it’s serializable if it should be. Finally, by using ToArray() and ShouldBeEmpty() we’re given a nice error message if the test fails:
TestCase 'Solutionizing.Tests.SerializabilityTests.Types_should_be_Serializable(Solutionizing.Domain.Money, Solutionizing.Domain)' failed:
  Expected: <empty>
  But was:  < <Solutionizing.Domain.Oops>, <Solutionizing.Domain.OopsAgain> >
SerializabilityTests.cs(29,0): at Solutionizing.Tests.SerializabilityTests.Types_should_be_Serializable(Type sampleType, String namespace)
I use a few criteria to determine if I expect the type to be serializable:
private bool ShouldBeSerializable(Type t)
{
    if (IsExempt(t))
        return false;

    if (t.IsAbstract && t.IsSealed) // Static class
        return false;

    if (t.IsInterface)
        return false;

    if (!t.IsPublic)
        return false;

    return true;
}
Other than IsExempt(), the code should be more or less self-explanatory. If you had never bothered to check how static classes are represented in IL, now you know: abstract (can’t be instantiated) + sealed (can’t be inherited). Also, note that !IsPublic will cover compiler-generated classes for iterators and closures that we don’t need to serialize.
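The abstract + sealed claim is easy to verify with a couple of lines of reflection; StaticExample here is just an illustration.

```csharp
// A C# static class compiles down to "abstract sealed" in IL, which is exactly
// the combination the ShouldBeSerializable check above looks for.
using System;

public static class StaticExample { }

public class Program
{
    public static void Main()
    {
        Type t = typeof(StaticExample);
        Console.WriteLine(t.IsAbstract && t.IsSealed); // True
    }
}
```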
The final piece is providing a way we can exempt certain classes from being tested:
private bool IsExempt(Type t)
{
    return exemptTypes.Any(e => e.IsAssignableFrom(t));
}

private Type[] exemptTypes = new[]
{
    typeof(SomeClassWithDictionary), // Wrapped dictionary is not serializable
    typeof(Attribute)                // Metadata are never serialized
};
Of course, this isn’t a replacement for actually testing that custom serialization works correctly for more complicated objects, particularly if your classes may depend on others that aren’t covered by these tests. But I have still found this test to be a useful first level of protection.
I’ve written hundreds of tests, read dozens of articles and listened to several presentations on unit testing, but until recently had never actually read a book dedicated to the subject. In reviewing my options, I was told repeatedly that I should start with Pragmatic Unit Testing (In C# with NUnit) from The Pragmatic Programmers, part of the three-volume Pragmatic Starter Kit. In the context of that starter kit, I found the book to be an excellent introduction to unit testing; however, a developer with sufficient experience could probably get by with a quick glance over the summary provided as Appendix C (which is available online).
But before I get into the book, let me start by applauding the idea of the Pragmatic Starter Kit. As I entered industry after receiving my degrees in Computer Engineering and Computer Science, it became clear that I was terribly unprepared for building quality software. Academia provided a solid foundation of theory and some (basic) techniques to structure code (OO, FP, etc), but provided precious little guidance for scaling projects beyond a few thousand lines of code. Version control was given one lecture and a trivial assignment (in CVS), the unit testing lecture did little to convince me that it actually had value, and automated testing was never even mentioned (in fact, build processes in general were scarcely discussed). These are the gaps that the Pragmatic Starter Kit aims to fill with practical advice from the field, and if Pragmatic Unit Testing is any indication the entire series should be required reading for new graduates (or even sophomores, really).
As one would expect from an introductory volume, the book begins with an excellent overview (PDF) of what unit testing is and why it matters. There are also several pages dedicated to rebuttals to common objections like “It takes too much time to write the tests”, “It’s not my job to test my code”, and my personal favorite “I’m being paid to write code, not to write tests”, which is answered brilliantly:
By that same logic, we’re not being paid to spend all day in the debugger either. Presumably we are being paid to write working code, and unit tests are merely a tool toward that end, in the same fashion as an editor, an IDE, or the compiler.
Developers are a proud lot, so the emphasis on testing as a powerful tool rather than a crutch is crucial.
Chapters 2 and 3 follow up with an introduction to testing with NUnit, first with a simple example and then with a closer look at structured testing with the framework. All the usual suspects are covered, including classic and constraint-based asserts, setup and teardown guidance, [Category], [ExpectedException], [Ignore] and more.
The most valuable chapters to a new tester will be chapters 4 and 5. The former provides the “Right BICEP” mnemonic to suggest what to test; the latter takes a closer look at the “CORRECT” boundary conditions (the B in BICEP) to test. The expanded acronyms are included in the aforementioned summary card (PDF). Even after you have a good handle on what to test, the mnemonics can still serve as a handy reminder, and for those starting out the overviews of each bullet are spot on. I also liked chapters 7–9, which give good guidance on the qualities of good tests and how testing can be applied effectively to projects and to improve code, though the refactoring example was a bit longer than it probably needed to be.
In my opinion, the weakest parts of the book were chapters 6 and 10, on mocking and UI testing, respectively. The former starts out strong, but gets bogged down once it starts talking about tools. The reader would be better off skipping section 6.3 altogether in favor of a good Rhino Mocks or Moq introduction. The discussion of UI testing, on the other hand, covers too little on a number of topics to be of much value other than to raise awareness that you should test all parts of the application.
Overall I was quite pleased with the quantity and quality of material covered for an introductory volume, awarding four out of five donkeys. The authors make a good argument for testing and offer sound guidance for how to do it. However, if you’re already familiar with unit testing you may be better off reading The Art of Unit Testing or finding more specific material online.
It’s no secret that I’m a fan of using extension methods to make code more concise and expressive. This is particularly handy for enhancing APIs outside of your control, from the base class library to ASP.NET MVC and SharePoint. However, there are certain situations where it might be useful to use extension methods even though you have the option to add those methods to the class or interface itself. Consider this simplified caching interface:
public interface ICacheProvider
{
    T Get<T>(string key);
    void Insert<T>(string key, T value);
}
And a simple application of the decorator pattern to implement a cached repository:
public class CachedAwesomeRepository : IAwesomeRepository
{
    private readonly IAwesomeRepository awesomeRepository;
    private readonly ICacheProvider cacheProvider;

    public CachedAwesomeRepository(IAwesomeRepository awesomeRepository, ICacheProvider cacheProvider)
    {
        this.awesomeRepository = awesomeRepository;
        this.cacheProvider = cacheProvider;
    }

    public Awesome GetAwesome(string id)
    {
        var awesome = cacheProvider.Get<Awesome>(id);
        if (awesome == null)
            cacheProvider.Insert(id, (awesome = awesomeRepository.GetAwesome(id)));
        return awesome;
    }
}
So far, so good. However, as caching is used more often it becomes clear that there’s a common pattern that we might want to extract:
T ICacheProvider.GetOrInsert<T>(string key, Func<T> valueFactory)
{
    T value = Get<T>(key);
    if (value == default(T))
        Insert(key, (value = valueFactory()));
    return value;
}
Which would reduce GetAwesome() to a single, simple expression:
public Awesome GetAwesome(string id)
{
    return cacheProvider.GetOrInsert(id, () => awesomeRepository.GetAwesome(id));
}
Now I just need to decide where GetOrInsert() lives. Since I control ICacheProvider, I could just add another method to the interface and update all its implementers. However, after starting down this path, I concluded this was not desirable for a number of reasons.
So instead I have a handy GetOrInsert() extension method (conversion is left as an exercise for the reader) that I can use to clean up my caching code without needing to change any of my cache providers or tests for existing consumers.
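For the curious, one way that exercise might come out is sketched below; the EqualityComparer swap is my own, since the == comparison against default(T) does not compile for an unconstrained type parameter.

```csharp
// Sketch of the extension-method conversion left as an exercise in the post.
// ICacheProvider matches the interface shown above.
using System;
using System.Collections.Generic;

public interface ICacheProvider
{
    T Get<T>(string key);
    void Insert<T>(string key, T value);
}

public static class CacheProviderExtensions
{
    public static T GetOrInsert<T>(this ICacheProvider cache, string key, Func<T> valueFactory)
    {
        T value = cache.Get<T>(key);

        // EqualityComparer handles both reference types (null check) and value
        // types (default check) without needing == on an unconstrained T.
        if (EqualityComparer<T>.Default.Equals(value, default(T)))
            cache.Insert(key, (value = valueFactory()));

        return value;
    }
}
```

Call sites look exactly like the GetAwesome() one-liner above; no provider has to change.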
The question is really analogous to whether or not Select() and Where() should be part of IEnumerable<T>. They are certainly useful ways to consume the interface, just as GetOrInsert() is, but they exist outside of what an IEnumerable<T> really is.
When you’re dealing with a system like SharePoint that returns most data as strings, it’s common to want to parse the data back into a useful numeric format. The .NET framework offers several options to achieve this, namely the static methods on System.Convert and the static Parse() methods on the various value types. However, these are limited in that they turn null string values into the default for the given type (0, false, etc.) and they throw exceptions to indicate failure, which might be a performance concern.
Often, a better option is to use the static TryParse() method provided by most value types (with the notable exception of enumerations). These follow the common pattern of returning a boolean to indicate success and using an out parameter to return the value. While much better suited for what we’re trying to achieve, the TryParse pattern requires more plumbing than I care to see most of the time; I just want the value. To that end, I put together a simple extension method to encapsulate the pattern:
public delegate bool TryParser<T>(string value, out T result) where T : struct;

public static T? ParseWith<T>(this string value, TryParser<T> parser) where T : struct
{
    T result;
    if (parser(value, out result))
        return result;
    return null;
}
The struct constraint on T is required to align with the constraint on the Nullable<T> returned by the method.
We can now greatly simplify our efforts to parse nullable values:
var myIntPropStr = properties.BeforeProperties["MyIntProp"] as string;

var myIntProp = myIntPropStr.ParseWith<int>(int.TryParse);
if (myIntProp == null)
    throw new Exception("MyIntProp is empty!");
One quirk of this technique is that Visual Studio usually cannot infer T from just the TryParse method because of its multiple overloads. One option would be to write a dedicated method for each value type, but I would view this as unnecessary cluttering of the string type. Your mileage may vary.
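For completeness, the dedicated-method option dismissed above might look something like the following sketch; ParseInt and ParseDouble are illustrative names, and the delegate plus ParseWith are repeated so the sample stands alone.

```csharp
// Sketch of per-type wrappers over ParseWith, trading one extra method per
// type for full inference at the call site.
using System;

public delegate bool TryParser<T>(string value, out T result) where T : struct;

public static class StringParseExtensions
{
    public static T? ParseWith<T>(this string value, TryParser<T> parser) where T : struct
    {
        T result;
        return parser(value, out result) ? (T?)result : null;
    }

    // No generic argument or method group needed by callers.
    public static int? ParseInt(this string value)
    {
        return value.ParseWith<int>(int.TryParse);
    }

    public static double? ParseDouble(this string value)
    {
        return value.ParseWith<double>(double.TryParse);
    }
}
```

The call site then reads myIntPropStr.ParseInt(), at the cost of one wrapper per value type you care about.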
Dynamic LINQ (DLINQ) is a LINQ extension provided in the VS 2008 Samples. Scott Guthrie provides a good overview here: Dynamic LINQ (Part 1: Using the LINQ Dynamic Query Library), but the executive summary is that it implements certain query operations on IQueryable (the non-generic variety), with filtering, grouping and projection specified with strings rather than statically-typed expressions.
I’ve never had a use for it, but a question on Stack Overflow caused me to take a second look…
…the selected groupbyvalue (Group) will always be a string, and the sum will always be a double, so I want to be able to cast into something like a List, where Result is an object with properties Group (string) and TotalValue (double).
Before we can solve the problem, let’s take a closer look at why it is being asked…
We can use the simplest of dynamic queries to explore a bit:
[Test]
public void DLINQ_IdentityProjection_ReturnsDynamicClass()
{
    IQueryable nums = Enumerable.Range(1, 5).AsQueryable();

    IQueryable q = nums.Select("new (it as Value)");

    Type elementType = q.ElementType;
    Assert.AreEqual("DynamicClass1", elementType.Name);
    CollectionAssert.AreEqual(new[] { typeof(int) },
        elementType.GetProperties().Select(p => p.PropertyType).ToArray());
}
DLINQ defines a special expression syntax for projection that is used to specify what values should be returned and how. The keyword it refers to the current element, which in our case is an int.
The result in question comes from DynamicQueryable.Select():
public static IQueryable Select(this IQueryable source, string selector, params object[] values)
{
    LambdaExpression lambda = DynamicExpression.ParseLambda(source.ElementType, null, selector, values);
    return source.Provider.CreateQuery(
        Expression.Call(
            typeof(Queryable), "Select",
            new Type[] { source.ElementType, lambda.Body.Type },
            source.Expression, Expression.Quote(lambda)));
}
The non-generic return type suggests that the type of the values returned is unknown at compile time. If we check an element’s type at runtime, we’ll see something like DynamicClass1. Tracing down the stack from DynamicExpression.ParseLambda(), we eventually find that DynamicClass1 is generated by a call to DynamicExpression.CreateClass() in ExpressionParser.ParseNew(). CreateClass() in turn delegates to a static ClassFactory which manages a dynamic assembly in the current AppDomain to hold the new classes, each generated by Reflection.Emit. The resulting type is then used to generate the MemberInit expression that constructs the object.
While dynamic objects are useful in some situations (thus support in C# 4), in this case we want to use static typing. Let’s specify our result type with a generic method:
IQueryable<TResult> Select<TResult>(this IQueryable source, string selector, params object[] values);
We just need a mechanism to insert our result type into DLINQ to supersede the dynamic result. This is surprisingly easy to implement, as ParseLambda() already accepts a resultType argument. We just need to capture it…
private Type resultType;

public Expression Parse(Type resultType)
{
    this.resultType = resultType;

    int exprPos = token.pos;
    // ...
…and then update ParseNew() to use the specified type:
Expression ParseNew()
{
    // ...
    NextToken();

    Type type = this.resultType ?? DynamicExpression.CreateClass(properties);

    MemberBinding[] bindings = new MemberBinding[properties.Count];
    for (int i = 0; i < bindings.Length; i++)
        bindings[i] = Expression.Bind(type.GetProperty(properties[i].Name), expressions[i]);
    return Expression.MemberInit(Expression.New(type), bindings);
}
If resultType is null, as it is in the existing Select() implementation, a DynamicClass is used instead.
The generic Select<TResult> is then completed by referencing TResult as appropriate:
public static IQueryable<TResult> Select<TResult>(this IQueryable source, string selector, params object[] values)
{
    LambdaExpression lambda = DynamicExpression.ParseLambda(source.ElementType, typeof(TResult), selector, values);
    return source.Provider.CreateQuery<TResult>(
        Expression.Call(
            typeof(Queryable), "Select",
            new Type[] { source.ElementType, typeof(TResult) },
            source.Expression, Expression.Quote(lambda)));
}
With the following usage:
public class ValueClass
{
    public int Value { get; set; }
}

[Test]
public void DLINQ_IdentityProjection_ReturnsStaticClass()
{
    IQueryable nums = Enumerable.Range(1, 5).AsQueryable();

    IQueryable<ValueClass> q = nums.Select<ValueClass>("new (it as Value)");

    Type elementType = q.ElementType;
    Assert.AreEqual("ValueClass", elementType.Name);
    CollectionAssert.AreEqual(nums.ToArray(), q.Select(v => v.Value).ToArray());
}
Note that the property names in TResult must match those in the Select query exactly. Changing the query to “new (it as value)” results in an unhandled ArgumentNullException in the Expression.Bind() call seen in the for loop of ParseNew() above, as the “value” property cannot be found.
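Were a friendlier failure desired, the bare GetProperty call in ParseNew() could be routed through a guard like the sketch below; GetRequiredProperty is a made-up helper, not part of the DLINQ sample.

```csharp
// Sketch: a guarded property lookup to use in place of type.GetProperty(...)
// so a case mismatch like "value" vs "Value" fails with a descriptive message
// instead of an ArgumentNullException from Expression.Bind.
using System;
using System.Reflection;

public static class ReflectionGuards
{
    public static PropertyInfo GetRequiredProperty(Type type, string name)
    {
        PropertyInfo property = type.GetProperty(name);
        if (property == null)
            throw new InvalidOperationException(
                string.Format("Type '{0}' has no public property '{1}' (names are case-sensitive).",
                    type.Name, name));
        return property;
    }
}
```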
So we can select dynamic types or existing named types, but what if we want the benefits of static typing without having to declare a dedicated ValueClass, as we can with anonymous types and normal static LINQ? As a variation on techniques used elsewhere, let’s define an overload of Select() that accepts an instance of the anonymous type whose values we will ignore, using its type to infer the desired return type. The overload is trivial:
public static IQueryable<TResult> Select<TResult>(this IQueryable source, TResult template, string selector, params object[] values)
{
    return source.Select<TResult>(selector, values);
}
With usage looking like this (note the required switch to var q):
[Test]
public void DLINQ_IdentityProjection_ReturnsStaticClass()
{
    IQueryable nums = Enumerable.Range(1, 5).AsQueryable();

    var q = nums.Select(new { Value = 0 }, "new (it as Value)");

    Type elementType = q.ElementType;
    Assert.IsTrue(elementType.Name.Contains("AnonymousType"));
    CollectionAssert.AreEqual(nums.ToArray(), q.Select(v => v.Value).ToArray());
}
However, if we try the above we encounter an unfortunate error:
The property ‘Int32 Value’ has no ‘set’ accessor
As you may or may not know, anonymous types in C# are immutable (modulo changes to objects they reference), with their values set through a compiler-generated constructor. (I’m not sure if this is true in VB.) With this knowledge in hand, we can update ParseNew() to check if resultType has such a constructor that we could use instead:
    // ...
    Type type = this.resultType ?? DynamicExpression.CreateClass(properties);

    var propertyTypes = type.GetProperties().Select(p => p.PropertyType).ToArray();
    var ctor = type.GetConstructor(propertyTypes);
    if (ctor != null)
        return Expression.New(ctor, expressions);

    MemberBinding[] bindings = new MemberBinding[properties.Count];
    for (int i = 0; i < bindings.Length; i++)
        bindings[i] = Expression.Bind(type.GetProperty(properties[i].Name), expressions[i]);
    return Expression.MemberInit(Expression.New(type), bindings);
}
And with that we can now project from a dynamic query onto static types, both named and anonymous, with a reasonably natural interface.
Due to licensing I can’t post the full example, but if you’re at all curious about Reflection.Emit or how DLINQ works I would encourage you to dive in and let us know what else you come up with. Things will get even more interesting with the combination of LINQ, the DLR and C# 4’s dynamic in the coming months.
One of the biggest surprises when I started working with WatiN was the omission of a mechanism to check for error conditions. A partial solution using a subclass has been posted before, but it doesn’t quite cover all the bases. Specifically, it’s missing a mechanism to attach existing Internet Explorer instances to objects of the enhanced subtype. Depending on the site under test’s use of pop-ups, this could be a rather severe limitation. So let’s see how we can fix it.
As WatiN is open source, one option is to just patch the existing implementation to include the desired behavior. I’ve uploaded a patch with tests here, but the gist of the patch is quite similar to the solution referenced above:
protected void AttachEventHandlers()
{
    ie.BeforeNavigate2 += (object pDisp, ref object URL, ref object Flags,
                           ref object TargetFrameName, ref object PostData,
                           ref object Headers, ref bool Cancel) =>
    {
        ErrorCode = null;
    };

    ie.NavigateError += (object pDisp, ref object URL, ref object Frame,
                         ref object StatusCode, ref bool Cancel) =>
    {
        ErrorCode = (HttpStatusCode)StatusCode;
    };
}

/// <summary>
/// HTTP Status Code of last error, or null if the last request was successful
/// </summary>
public HttpStatusCode? ErrorCode { get; private set; }
Before every request we clear out the error code, with errors captured as an enum value borrowed from System.Net. We complete the patch by placing calls to our AttachEventHandlers() method in two places.
At this point we can now assert success:
using (IE ie = new IE("https://solutionizing.net/"))
{
    Assert.That(ie.ErrorCode, Is.Null);
}
Or specific kinds of failure:
using (IE ie = new IE("https://solutionizing.net/4040404040404"))
{
    Assert.That(ie.ErrorCode, Is.EqualTo(HttpStatusCode.NotFound));
}
See the patch above for a more complete set of example tests.
It’s wonderful that we have the option to make our own patched build with the desired behavior, but what if we would rather use the binary distribution? Well through the magic of inheritance we can get most of the way there pretty easily:
public class MyIE : IE
{
    public MyIE()
    {
        Initialize();
    }

    public MyIE(object shDocVwInternetExplorer) : base(shDocVwInternetExplorer)
    {
        Initialize();
    }

    public MyIE(string url) : base(url)
    {
        Initialize();
    }

    // Remaining c'tors left as an exercise

    // Property named ie for consistency with the private field in the parent
    protected InternetExplorer ie
    {
        get { return (InternetExplorer)InternetExplorer; }
    }

    protected void Initialize()
    {
        AttachEventHandlers();
    }

    // AttachEventHandlers() and ErrorCode as defined above
}
But as I suggested before, this is where we run into a bit of a snag. The IE class also provides a set of static AttachToIE() methods that, as their name suggests, return an IE object for an existing Internet Explorer window. These static methods have the downside that they are hard-coded to return objects of type IE, not our enhanced MyIE type. And because all the relevant helper methods are private and not designed for reuse, we have no choice but to pull them into our subclass in their entirety:
public new static MyIE AttachToIE(BaseConstraint findBy)
{
    return findIE(findBy, Settings.AttachToIETimeOut, true);
}

public new static MyIE AttachToIE(BaseConstraint findBy, int timeout)
{
    return findIE(findBy, timeout, true);
}

public new static MyIE AttachToIENoWait(BaseConstraint findBy)
{
    return findIE(findBy, Settings.AttachToIETimeOut, false);
}

public new static MyIE AttachToIENoWait(BaseConstraint findBy, int timeout)
{
    return findIE(findBy, timeout, false);
}

private static MyIE findIE(BaseConstraint findBy, int timeout, bool waitForComplete)
{
    SHDocVw.InternetExplorer internetExplorer = findInternetExplorer(findBy, timeout);

    if (internetExplorer != null)
    {
        MyIE ie = new MyIE(internetExplorer);
        if (waitForComplete)
        {
            ie.WaitForComplete();
        }
        return ie;
    }

    throw new IENotFoundException(findBy.ConstraintToString(), timeout);
}

protected static SHDocVw.InternetExplorer findInternetExplorer(BaseConstraint findBy, int timeout)
{
    Logger.LogAction("Busy finding Internet Explorer matching constriant " + findBy.ConstraintToString());

    SimpleTimer timeoutTimer = new SimpleTimer(timeout);

    do
    {
        Thread.Sleep(500);

        SHDocVw.InternetExplorer internetExplorer = findInternetExplorer(findBy);
        if (internetExplorer != null)
        {
            return internetExplorer;
        }
    } while (!timeoutTimer.Elapsed);

    return null;
}

private static SHDocVw.InternetExplorer findInternetExplorer(BaseConstraint findBy)
{
    ShellWindows allBrowsers = new ShellWindows();

    int browserCount = allBrowsers.Count;
    int browserCounter = 0;

    IEAttributeBag attributeBag = new IEAttributeBag();

    while (browserCounter < browserCount)
    {
        attributeBag.InternetExplorer = (SHDocVw.InternetExplorer)allBrowsers.Item(browserCounter);

        if (findBy.Compare(attributeBag))
        {
            return attributeBag.InternetExplorer;
        }

        browserCounter++;
    }

    return null;
}
The original version of the first findInternetExplorer() is private. Were it protected instead, we would only have had to implement our own findIE() to wrap the found InternetExplorer object in our subtype.
I won’t go so far as to say private methods are a code smell, but they certainly can make the O in OCP more difficult to achieve.
So there you have it, two different techniques for accessing HTTP error codes in WatiN 1.3. At some point I’ll look at adding similar functionality to 2.0, if it’s not already there. And if someone on the project team sees this, feel free to run with it.
One of the easiest ways to improve web site performance is to enable HTTP compression (often referred to as GZIP compression), which trades CPU time to compress content for a reduced payload delivered over the wire. In the vast majority of cases, the trade-off is a good one.
When implementing HTTP compression, your content will break down into three categories: content that is already compressed (JPEG and PNG images, PDFs and the like), static content that should be compressed, and dynamic content that should be compressed. Excluding already-compressed content will need to be considered regardless of the techniques used to compress categories 2 and 3.
Since version 5, IIS has included support for both kinds of HTTP compression. This can be enabled through the management interface, but you will almost certainly want to tweak the default configuration in the metabase (see script below). While IIS works great for compressing static files, its extension-based configuration is rather limited when serving up dynamic content, especially if you don’t use extensions (as with most ASP.NET MVC routes) or you serve dynamic content that should not be compressed. A better solution is provided in HttpCompress by Ben Lowery, a configurable HttpModule that allows content to be excluded from compression by MIME type. A standard configuration might look something like this:
<configuration>
  ...
  <blowery.web>
    <httpCompress preferredAlgorithm="gzip" compressionLevel="normal">
      <excludedMimeTypes>
        <add type="image/jpeg" />
        <add type="image/png" />
        <add type="image/gif" />
        <add type="application/pdf" />
      </excludedMimeTypes>
      <excludedPaths></excludedPaths>
    </httpCompress>
  </blowery.web>
  ...
</configuration>
To supplement the compressed dynamic content, you should also enable static compression for the rest of your not-already-compressed content. The script below should be pretty self-explanatory, but note that it requires manual steps for the permissions and the service restart.
If you have anything else to add, or have problems with the script, please let me know.
@echo off

set adsutil=C:\Inetpub\AdminScripts\adsutil.vbs
set tcfpath=%windir%\IIS Temporary Compressed Files
set extlist=css htm html js txt xml

mkdir "%tcfpath%"
echo Ensure IIS_WPG has Full Control on %tcfpath%
explorer "%tcfpath%\.."
pause

cscript.exe %adsutil% set w3svc/Filters/Compression/Parameters/HcDoStaticCompression true
cscript.exe %adsutil% set w3svc/Filters/Compression/Parameters/HcCompressionDirectory "%tcfpath%"
cscript.exe %adsutil% set w3svc/Filters/Compression/DEFLATE/HcFileExtensions %extlist%
cscript.exe %adsutil% set w3svc/Filters/Compression/GZIP/HcFileExtensions %extlist%

echo Restart IIS Admin Service - IISRESET does not seem to work
pause
echo Close Services to continue...
Services.msc

cscript.exe %adsutil% get w3svc/Filters/Compression/Parameters/HcDoStaticCompression
echo Should be True -----------------------------^^
pause