SPWeb.AssociatedGroups.Contains Lies

While working on SPExLib (several months ago), I revisited this post, which presented a functional approach to a solution Adam describes here. Both posts include logic to add an SPWeb group association, which at its simplest could look something like this:

SPGroup group = web.SiteGroups[groupName];
if (!web.AssociatedGroups.Contains(group))
{
    web.AssociatedGroups.Add(group);
    web.Update();
}

While testing on a few groups, I noticed that the Contains() call lies, always returning false. This behavior can also be verified with PowerShell:

PS > $w.AssociatedGroups | ?{ $_.Name -eq 'Designers' } | select Name

Name
----
Designers

PS > $g = $w.SiteGroups['Designers']
PS > $w.AssociatedGroups.Contains($g)
False

Of course, it’s not actually lying; it just doesn’t do what we expect. Behind the scenes, AssociatedGroups is implemented as a simple List<SPGroup> populated with group objects retrieved by the IDs stored in the SPWeb's vti_associategroups property. The problem is that List<T>.Contains() uses EqualityComparer<T>.Default to find a match, which falls back to reference equality for reference types like SPGroup that don’t implement IEquatable<T> or override Equals().
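The same behavior is easy to reproduce outside SharePoint. In this minimal sketch, Group is a stand-in for SPGroup (a reference type with no Equals() override), so two objects describing the "same" group still fail Contains():

```csharp
using System;
using System.Collections.Generic;

// Stand-in for a reference type like SPGroup that neither implements
// IEquatable<T> nor overrides Equals().
class Group
{
    public int Id;
    public string Name;
}

static class Program
{
    static void Main()
    {
        var groups = new List<Group> { new Group { Id = 1, Name = "Designers" } };

        // A second object representing the "same" group, as a fresh lookup
        // (e.g. web.SiteGroups[groupName]) would return.
        var sameGroup = new Group { Id = 1, Name = "Designers" };

        // EqualityComparer<Group>.Default falls back to reference equality.
        Console.WriteLine(groups.Contains(sameGroup));                   // False
        Console.WriteLine(groups.Exists(g => g.Name == sameGroup.Name)); // True
    }
}
```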

To get around this, SPExLib provides a few extension methods to make group collections and SPWeb.AssociatedGroups easier to work with and more closely obey the Principle of Least Surprise:

public static bool NameEquals(this SPGroup group, string name)
{
    return string.Equals(group.Name, name, StringComparison.OrdinalIgnoreCase);
}

public static bool Contains(this SPGroupCollection groups, string name)
{
    return groups.Any<SPGroup>(group => group.NameEquals(name));
}

public static bool HasGroupAssociation(this SPWeb web, string name)
{
    return web.AssociatedGroups.Contains(name);
}

public static bool HasGroupAssociation(this SPWeb web, SPGroup group)
{
    if (group == null)
        throw new ArgumentNullException("group");
    return web.HasGroupAssociation(group.Name);
}

public static void EnsureGroupAssociation(this SPWeb web, SPGroup group)
{
    if (!web.HasGroupAssociation(group))
    {
        web.AssociatedGroups.Add(group);
        web.Update();
    }
}

The code should be pretty self-explanatory. The name comparison logic in NameEquals() is written to align with how SharePoint compares group names internally, though they use their own implementation of case insensitivity because the framework’s isn’t good enough. Or something like that.

There should be two lessons here:

  1. Don’t assume methods that have a notion of equality, like Contains(), will behave like you expect.
  2. Use SPExLib and contribute other extensions and helpers you find useful. :)

Using IDisposables with LINQ

Objects that implement IDisposable are everywhere. The interface even gets its own language features (C#, VB, F#). However, LINQ throws a few wrenches into things:

  1. LINQ’s query syntax depends on expressions; using blocks are statements.
  2. When querying a sequence of IDisposable objects, there’s no easy way to ensure disposal after each element has been consumed.
  3. Returning deferred queries from within a using statement is often desired, but fails spectacularly.

There are possible work-arounds for each issue…

  1. Put the using statement in a method (named or anonymous) that is called from the query. See also: Thinking Functional: Using.
  2. Use a method that creates a dispose-safe iterator of the sequence, like AsSafeEnumerable().
  3. Refactor the method to inject the IDisposable dependency, as shown in the first part of Marc’s answer here.

But, as you might have guessed, I would like to propose a better solution. The code is really complex, so bear with me:

public static IEnumerable<T> Use<T>(this T obj) where T : IDisposable
{
    try
    {
        yield return obj;
    }
    finally
    {
        if (obj != null)
            obj.Dispose();
    }
}

That’s it. We’re turning our IDisposable object into a single-element sequence. The trick is that the C# compiler will build an iterator for us that properly handles the finally clause, ensuring that our object will be disposed. It might be helpful to set a breakpoint on the finally clause to get a better idea what’s happening.

So how can this simple method solve all our problems? First up: “using” a FileStream object created in a LINQ query:

var lengths = from path in myFiles
              from fs in File.OpenRead(path).Use()
              select new { path, fs.Length };

Since the result of Use() is a single-element sequence, we can think of from fs in something.Use() as an assignment of that single value, something, to fs. In fact, it’s really quite similar to an F# use binding in that it will automatically clean itself up when it goes out of scope (by its enumerator calling MoveNext()).
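To see the lifetime in action without FileStream, here's a small sketch using a toy Resource class (an assumption, standing in for any IDisposable) together with the Use() method above. Disposal happens only once the enumerator moves past the single element:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class DisposableExtensions
{
    // The Use() iterator from the post: wraps an IDisposable in a
    // single-element sequence whose finally block guarantees disposal.
    public static IEnumerable<T> Use<T>(this T obj) where T : IDisposable
    {
        try { yield return obj; }
        finally { if (obj != null) obj.Dispose(); }
    }
}

// A toy disposable so we can observe when cleanup happens.
class Resource : IDisposable
{
    public bool Disposed;
    public int Value = 42;
    public void Dispose() { Disposed = true; }
}

static class Program
{
    static void Main()
    {
        var resource = new Resource();

        // "from r in resource.Use()" binds the single element, like F#'s `use`.
        var query = from r in resource.Use()
                    select r.Value;

        Console.WriteLine(resource.Disposed); // False: query not yet enumerated
        Console.WriteLine(query.Single());    // 42
        Console.WriteLine(resource.Disposed); // True: disposed after enumeration
    }
}
```

Note the deferred execution: nothing is created or disposed until the query is actually enumerated, which is exactly why the DataContext example later in the post works.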

Next, disposing elements from a collection. I’ll use the same SharePoint problem that AsSafeEnumerable() solves:

var webs = from SPWeb notDisposed in site.AllWebs
           from web in notDisposed.Use()
           select web.Title;

I find this syntax rather clumsy compared with AsSafeEnumerable(), but it’s there if you need it.

Finally, let’s defer disposal of a LINQ to SQL DataContext until after the deferred query is executed, as an answer to the previously-linked Stack Overflow question:

IQueryable<MyType> MyFunc(string myValue)
{
    return from dc in new MyDataContext().Use()
           from row in dc.MyTable
           where row.MyField == myValue
           select row;
}

void UsingFunc()
{
    var result = MyFunc("MyValue").OrderBy(row => row.SortOrder);
    foreach(var row in result)
    {
        //Do something
    }
}

The result of MyFunc now owns its destiny completely. It doesn’t depend on some potentially disposed DataContext – it just creates one that it will dispose when it’s done. There are probably situations where you would want to share a DataContext rather than create one on demand (I don’t use LINQ to SQL, I just blog about it), but again it’s there if you need it.

I’ve only started using this approach recently, so if you have any problems with it please share.

Stylish Gears: Customizing SPLongOperation

A frequently-asked branding question is how to customize the “gear” page shown during an SPLongOperation. The short answer is “you can’t”; but that’s not entirely true. The operation can be slightly customized using its LeadingHTML and TrailingHTML properties, which are written directly to the response stream. Because they aren’t encoded, we can use one to inject some JavaScript into the page that can manipulate the DOM, insert a stylesheet, really do anything we want. It’s not an ideal solution, as there will be a brief moment where the page is shown in its original form before the script can execute, but I believe it’s about the best we can do without directly modifying 12\TEMPLATE\LAYOUTS\gear.aspx.

As a quick proof of concept, here’s the source of an .aspx you can place in LAYOUTS to see a long operation page that uses the current theme:

<%@ Page Language="C#" %>
<%@ Import Namespace="Microsoft.SharePoint" %>
<%@ Import Namespace="System.Text" %>

<script language="C#" runat="server">
  protected override void OnLoad(EventArgs e)
  {
    string themeUrl = SPContext.GetContext(this.Context).Web.ThemeCssUrl;
    using (SPLongOperation op = new SPLongOperation(this.Page))
    {
      StringBuilder sb = new StringBuilder();
      sb.Append("</span> \n");
      sb.Append("    <script language=\"javascript\"> \n");
      sb.Append("      (function(){ \n");
      sb.Append("        var objHead = document.getElementsByTagName('head')[0]; \n");
      sb.Append("        if(objHead) { \n");
      sb.Append("          var objTheme = objHead.appendChild(document.createElement('link')); \n");
      sb.Append("          objTheme.rel = 'stylesheet'; \n");
      sb.AppendFormat("          objTheme.href = '{0}'; \n", themeUrl);
      sb.Append("          objTheme.type = 'text/css'; \n");
      sb.Append("        } \n");
      sb.Append("      })();");
      sb.Append("\n<"+"/script><span>");
      op.TrailingHTML = sb.ToString();
      op.Begin();

      System.Threading.Thread.Sleep(10000);
    }
  }
</script>

I use a StringBuilder because embedded scripts don’t play well with multi-line constants. In production, I would probably embed the script as a resource.

Here’s the result with the built-in Granite theme:
Granite-Themed SPLongOperation

SPExLib Release: These Are A Few Of My Favorite Things

It’s no secret that I’m a big fan of using extension methods to simplify work with the SharePoint object model. Wictor Wilén has allowed me to incorporate many of my “greatest hits” (and some new techniques! more on those in coming weeks) into his excellent SharePoint Extensions Lib project, which added a new release over the weekend (also see Wictor’s announcement).

It’s also worth pointing out that this isn’t just a library of extension methods. It also includes some useful base controls and auxiliary classes, including SPDisposeCheckIgnoreAttribute and SPDisposeCheckID with IntelliSense support. If you have classes or methods that you simply can’t live without, we’d love to incorporate them as well.

Additional reading on some of the extensions included:

SPExLib Features

  • Namespace: SPExLib.General
    • Extensions to the .NET 3.5 SP1 Fx
  • Namespace: SPExLib.SharePoint
    • Extensions to the SharePoint object model.
  • Namespace: SPExLib.SharePoint.Linq
    • LINQ extensions for the SharePoint object model, including LINQ operations on SPWeb/SPSiteCollection using dispose-safe methods.
  • Namespace: SPExLib.SharePoint.Linq.Base
    • Implementation of IEnumerable<T> on the SPBaseCollection, which Linq-enables all collections in the SharePoint object model.
  • Namespace: SPExLib.SharePoint.Security
    • Extension methods that simplify impersonation tasks on SPSite and SPWeb objects
  • Namespace: SPExLib.SharePoint.Tools
    • SPDispose checker utilities
  • Namespace: SPExLib.Diagnostics
    • Debug and Trace features
  • Namespace: SPExLib.Controls
    • Template classes for WebParts and EditorParts

Check it out!

SPWebConfigModification Works Fine

Manpreet Alag's recent post, SPWebConfigModification does not work on Farms with multiple WFEs, has been making its rounds on Twitter and the link blogs. A post title like that is sure to get attention, but is it really true? After looking a bit closer, I don’t believe it is.

The post suggests that this doesn’t work:

SPSite siteCollection = new SPSite("http://MOSSServer/");
SPWebApplication webApp = siteCollection.WebApplication;
// ...
webApp.WebConfigModifications.Add(modification);
webApp.Farm.Services.GetValue<SPWebService>().ApplyWebConfigModifications();

But this does:

SPWebService.ContentService.WebConfigModifications.Add(modification);
SPWebService.ContentService.Update();
SPWebService.ContentService.ApplyWebConfigModifications();

Drawing this final conclusion:

Instead of adding modifications to WebConfigModifcations of SPWebApplication object, we are using SPWebService.ContentService to call ADD and UPDATE methods. Whenever required, it is always advised to use SPWebService.ContentService to make the modifications rather than accessing Farm instance through SPWebApplication.

The suggestion is that there’s a problem with applying the changes through webApp.Farm. But that Farm is just SPFarm.Local:

public SPSite(string requestUrl) : this(SPFarm.Local, new Uri(requestUrl), false, SPSecurity.UserToken)
{
}

So the last line is essentially equivalent to this:

SPFarm.Local.Services.GetValue<SPWebService>()

Taking a peek at ContentService, we find this definition:

public static SPWebService ContentService
{
    get
    {
        if (SPFarm.Local != null)
        {
            return SPFarm.Local.Services.GetValue<SPWebService>();
        }
        return null;
    }
}

The modified sample isn’t actually doing anything different to apply the changes! So the problem is either in how SharePoint handles Web Application-scoped web config changes, or that the changes aren’t being applied correctly. The latter is much more likely than the former, and indeed the solution is actually quite simple: just look for the only other significant difference between the code samples.

webApp.WebConfigModifications.Add(modification);
webApp.Update(); // Oops!
webApp.Farm.Services.GetValue<SPWebService>().ApplyWebConfigModifications();

A quick PowerShell session or console app would have verified that the config changes weren’t being saved to the database.

So what have we learned?

  1. Always call Update() after making changes to an SPPersistedObject (like SPWebApplication or SPWebService).
  2. SPWebService.ContentService is a shortcut for SPFarm.Local.Services.GetValue<SPWebService>().
  3. Check your code carefully before blaming the SharePoint API!

Join SharePoint Lists with LINQ

I just read yet another post by Adam Buenz that got me thinking, this time about querying multiple SharePoint lists. Here’s the code he came up with:

var resultSet  = list1.Items.Cast<SPListItem>()
.Where(i => Equals (String.Compare(i["Property To Match #1"].ToString(), "Example String Literal"), 0))
.SelectMany(x => list2.Items.Cast<SPListItem>()
    .Where(i => Equals(String.Compare(new SPFieldLookupValue(x["Client"].ToString()).LookupValue, (string) i["Property To Match #2"]), 0)));

My first thought was that we could make it more readable with LINQ syntax:

var res = from SPListItem pi in list1.Items
          where pi["Property To Match #1"] as string == "Example String Literal"
          from SPListItem ci in list2.Items
          where new SPFieldLookupValue(ci["Client"] as string).LookupValue == pi["Property To Match #2"]
          select new { Parent = pi, Child = ci };

Behind the scenes, this will translate into equivalent extension method calls. The other adjustments are based on personal preference: ToString() can cause null reference exceptions, as string will not; and String.Compare() != String.Equals().

Next, let’s optimize the actual SharePoint queries. As a general rule we should always specify the desired ViewFields to eliminate unused data, and our first where clause should be handled with CAML if possible [see also, Is it a good idea to use lambda expressions for querying SharePoint data?].

var pItems = list1.GetItems(new SPQuery() {
    Query = "... ['Property To Match #1'] == 'Example String Literal'...",
    ViewFields = "..."
});
var cItems = list2.GetItems(new SPQuery() {
    ViewFields = "..."
});
var res = from SPListItem pi in pItems
          from SPListItem ci in cItems
          where new SPFieldLookupValue(ci["Client"] as string).LookupValue == pi["Property To Match #2"]
          select new { Parent = pi, Child = ci };

Now that we’re getting our data as efficiently as possible, we can look at what LINQ is doing with them. Behind the scenes, SelectMany is essentially implemented like this:

public static IEnumerable<TResult> SelectMany<TSource, TResult>(
    this IEnumerable<TSource> source,
    Func<TSource, IEnumerable<TResult>> selector)
{
    foreach(TSource element in source)
        foreach(TResult childElement in selector(element))
            yield return childElement;
}

For each item in our parent collection (source), the entire child collection is enumerated in search of items that match the predicate. This seems rather inefficient since we’re comparing the same values each time. Conveniently, LINQ provides a join operator for this purpose:

var res = from SPListItem pi in pItems
          join SPListItem ci in cItems
              on pi["Property To Match #2"]
              equals new SPFieldLookupValue(ci["Client"] as string).LookupValue
          select new { Parent = pi, Child = ci };

Behind the scenes, this translates into a call to the Join method:

var res = pItems.Cast<SPListItem>().Join(cItems.Cast<SPListItem>(),
              pi => pi["Property To Match #2"],
              ci => new SPFieldLookupValue(ci["Client"] as string).LookupValue,
              (pi, ci) => new { Parent = pi, Child = ci }
          );

Note that the left- and right-hand sides of the equals keyword are treated separately. The left-hand side operates on the first collection, the right-hand side operates on the second collection, and obviously both expressions must return the same type. This might be easier to see from an implementation of Join:

public static IEnumerable<TResult> Join<TOuter, TInner, TKey, TResult>(
    this IEnumerable<TOuter> outer,
    IEnumerable<TInner> inner,
    Func<TOuter, TKey> outerKeySelector,
    Func<TInner, TKey> innerKeySelector,
    Func<TOuter, TInner, TResult> resultSelector)
{
    ILookup<TKey, TInner> lookup = inner.ToLookup(innerKeySelector);
    return from outerItem in outer
           from innerItem in lookup[outerKeySelector(outerItem)]
           select resultSelector(outerItem, innerItem);
}

So in our case, Join will build a lookup of all child items based on the lookup value, and then perform a SelectMany to cross join the parent items with the child items found from a lookup by the matched property. This dictionary lookup will almost certainly perform better than a full enumeration of the list, especially for larger lists and more complex keys.
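Stripped of SharePoint, the lookup-based join is easy to see on plain data. In this sketch the tuples are stand-ins for the parent and child list items, keyed on a client name:

```csharp
using System;
using System.Linq;

static class Program
{
    static void Main()
    {
        // Plain-data stand-ins: (Client, ParentId) and (Client, ChildTitle).
        var parents = new[] { ("Acme", 1), ("Initech", 2) };
        var children = new[] { ("Acme", "Task A"), ("Acme", "Task B"), ("Initech", "Task C") };

        // What Join does internally: one pass to build the lookup...
        var lookup = children.ToLookup(c => c.Item1);

        // ...then a cross join of each parent with its matching bucket,
        // instead of rescanning the whole child collection per parent.
        var res = from p in parents
                  from c in lookup[p.Item1]
                  select new { Parent = p.Item2, Child = c.Item2 };

        foreach (var row in res)
            Console.WriteLine($"{row.Parent}: {row.Child}");
        // 1: Task A
        // 1: Task B
        // 2: Task C
    }
}
```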

Elegant Inline Debug Tracing

As much fun as it is to step through code with a debugger, I usually prefer to use System.Diagnostics.Debug and Trace with DebugView to see what’s happening in realtime. This is particularly handy to track intermediate results in higher-order functions that you might not be able to step into. However, it’s not always convenient to insert debugging statements amongst the composed expressions of F#, PowerShell or LINQ.

An alternative first came to mind while working in F#:

let dbg x = System.Diagnostics.Debug.WriteLine(x |> sprintf "%A"); x

(Read |> as “as next parameter to”.) We can then use this function anywhere to peek at a value, perhaps an intermediate list in this trivial example:

let data = [1..10]
           |> List.filter (fun i -> i%3 = 0) |> dbg
           |> List.map (fun i -> i*i)

Indeed [3; 6; 9] are traced as multiples of three. Not a particularly convincing example, but it should be pretty easy to imagine a more complex algorithm for which unintrusive tracing would be useful.

This works pretty well with F#’s |> operator to push values forward, but what about C#? Given my posting history, it shouldn’t be hard to guess where I’m going with this…

Extension Methods

So if |> is “as next parameter to”, the . of an extension method call might read “as first parameter to”. So we can implement a roughly equivalent function (sans F#’s nice deep-print formatter "%A") like so:

    public static T Debug<T>(this T value)
    {
        Debug.WriteLine(value);
        return value;
    }

    public static T Debug<T>(this T value, string category)
    {
        Debug.WriteLine(value, category);
        return value;
    }

I find the optional label handy to keep different traces separate. Looking again, there’s an overload that accepts a category, so we’ll use that instead. So why might this be useful? Maybe we want to log the value assigned within an object initializer:

var q = new SPQuery() {
  Query = GetMyQuery().Debug("Query")
};

Rather than store the query string to a temporary variable or retrieve the property after it’s been set, we can just trace the value inline. Or consider a LINQ example:

var items = from SPListItem item in list.GetItems(q)
            let url = new SPFieldUrlValue(item["URL"] as string)
            where url.Url.Debug("URL").StartsWith(baseUrl, StringComparison.OrdinalIgnoreCase)
            select new
            {
                Title = item.Title.Debug("Title"),
                Description = url.Description,
            };

Here we log all URLs that pass through, even the ones excluded from the result by the predicate. This would be much harder to implement efficiently without inline logging.

This technique works great for simple objects with a useful ToString(), but what about more complex objects? As has often been the answer lately, we can use higher-order functions:

    public static T Dbg<T, R>(this T value, Func<T, R> selector)
    {
        Debug.WriteLine(selector(value));
        return value;
    }

    public static T Dbg<T, R>(this T value, string category, Func<T, R> selector)
    {
        Debug.WriteLine(selector(value), category);
        return value;
    }

Now we can provide a delegate to trace whatever we want without affecting the object itself. For example, we can easily trace a row count for the DataView being returned:

public DataView GetResults()
{
    var myTable = GetDataTable();
    // Process data...
    return myTable.DefaultView.Dbg("Result Count", v => v.Count);
}

I could go on, but you get the idea.
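For a self-contained illustration, here's the selector-based tap with one liberty taken: it writes to Console.Error instead of Debug.WriteLine, so the trace is visible without a debugger or trace listener attached. The pass-through behavior is the point:

```csharp
using System;

static class DebugExtensions
{
    // Selector-based tap, as in the post, but writing to Console.Error
    // so the trace shows up in a plain console run (an assumption made
    // for runnability; the post uses Debug.WriteLine).
    public static T Dbg<T, R>(this T value, string category, Func<T, R> selector)
    {
        Console.Error.WriteLine("{0}: {1}", category, selector(value));
        return value; // the value flows through unchanged
    }
}

static class Program
{
    static void Main()
    {
        int[] data = { 1, 2, 3 };

        // Trace the array's length inline without disturbing the expression.
        int first = data.Dbg("Count", d => d.Length)[0];

        Console.WriteLine(first); // 1 (with "Count: 3" traced to stderr)
    }
}
```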

PowerShell Filter

Finally, we can implement similar functionality in PowerShell using a filter with an optional scriptblock parameter:

filter Debug([scriptblock] $sb = { $_ })
{
  [Diagnostics.Debug]::WriteLine((& $sb))
  $_
}

PS > 1..3 | Debug { $_*2 } | %{ $_*$_ }
1
4
9

Which traces 2, 4, 6, as expected.

Update 4/19/2009: Changed functions to use category overloads. And another point to consider: if the value being traced could be null, selector should be designed accordingly to avoid NullReferenceException. There’s nothing worse than bugs introduced by tracing or logging.

SharePoint Disposal Wish List

I was almost off my SharePoint disposal kick when Jeremy Jameson had to pull me back in with this post. Resisting the temptation to rehash some stuff I’ve covered before, I thought it might be therapeutic to spend some time discussing what the dispose story should look like. I’m sure the API for WSS 4.0 is more or less locked at this point, but it still might be useful to frame how I think about ownership of these blasted IDisposables.

In my SharePoint fantasy world, the dispose story can be explained in four bullets:

  1. Use of a disposed object throws an exception; failures to dispose properly are logged.
  2. IDisposable references retrieved from an instance property are owned by the instance.
  3. IDisposable references retrieved from a constructor, method or collection indexer are owned by the developer and should always be disposed.
  4. IDisposable references from an enumerator are owned and disposed by the enumerator.

Let’s examine these a bit more closely.

1. Disposed = Invalid = Use is Exceptional; Discoverability = Good

The leaked object logging is already pretty good, but it’s undermined by the complete failure of disposed objects to actually behave as disposed. Fixing discoverability would make all other points a matter of convenience.

2. Instance Properties

An instance property suggests “mine”. My context’s Site, my site’s RootWeb, my feature’s Parent, etc. If something is “mine,” then I should be responsible for it. This behavior is already consistent with three exceptions:

SPWeb.ParentWeb

This property is tricky because the instance is shared by all consumers of the SPWeb (think SPContext.Web), but it is not cleaned up when the SPWeb is disposed, and if it is disposed there is no mechanism to notify the child so it can null out its reference. So what to do? Well in accordance with my fantasy guidelines, there are two options, either of which would resolve the ambiguity:

  1. Leave ParentWeb as a property and require that the SPWeb dispose its parent as part of the clean-up process.
  2. Provide SPWeb.GetParentWeb() instead, returning a new SPWeb on each call that is owned by the developer.

Since GetParentWeb() can be implemented as an extension method, I guess I would prefer option 1.

SPList.ParentWeb

This property is almost fine, but for some reason has an edge case that returns a new SPWeb (details here). It should be fine as a property, but that edge case needs to disappear. Why can’t it just return SPList.Lists.Web?

MOSS UserProfile.PersonalSite

This property returns a new SPSite on each call, so ideally it would be a method instead, perhaps GetPersonalSite. That seems much easier than having UserProfile implement IDisposable to facilitate cleanup of a shared instance.

3. Constructors & Methods & Indexers, Oh My!

If properties are “mine”, then everything else is “yours”. Again, the vast majority of the API already fits this behavior. The few discrepancies:

SPControl.GetContextSite() & SPControl.GetContextWeb()

I believe these are the only methods that return IDisposables that SharePoint owns. MSDN clearly indicates that behavior, but for consistency the preferred usage (consistent with our fantasy guidelines) would be to use an SPContext’s Site and Web properties instead.

MOSS PublishingWeb

I’ve already discussed the semi-disposability of PublishingWeb, which just needs a few tweaks to fit nicely in my fantasy model:

  • Implement IDisposable! Close() is already implemented, just need the interface to take advantage of our languages’ using constructs.
  • I believe ParentPublishingWeb would be fixed if SPWeb.ParentWeb were disposed with the child. Alternatively, change to GetParentPublishingWeb and ensure that the returned PublishingWeb will close the internal SPWeb on dispose.
  • PublishingWebCollection.Add() is fine, but the indexer works in such a way that an internal SPWeb is created that is not cleaned up by PublishingWeb.Close(). I would consider this a bug in the current code base, which would naturally be fixed in my fantasy.

4. Enumerating Collections

This isn’t quite as simple as it would seem. When dealing with collections and enumerators, there are essentially two classes of operation:

  1. Enumerate over all or part of the collection and perform an operation. (LINQ Select, GroupBy, OrderBy)
  2. Enumerate over the collection to pick an object from the collection to return. (LINQ First, Last, ElementAt)

In my experience, the vast majority fall into the first category, including most of LINQ. These operations shouldn’t need to know that the objects being enumerated are IDisposable; it’s up to the enumerator to behave appropriately. Rather than sacrifice these many useful operations, I suggest that the enumerator should include proper disposal of the objects it creates—this is precisely what my AsSafeEnumerable() iterator does.

The second category can either be handled using for loops and indexers, which return objects that the caller must dispose, or through higher-order functions that allow dispose-safe operation on or selection from the desired element. But again, these seem to be the exception rather than the rule, and can be handled accordingly.
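The first category is worth a concrete sketch. This AsSafeEnumerable()-style iterator (a SharePoint-free approximation of the extension mentioned above, using the same toy Resource stand-in for a disposable element) disposes each element as the loop moves past it, so operators like Select never need to know the elements are IDisposable:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class SafeEnumerableExtensions
{
    // Each element is disposed once enumeration moves past it, so
    // category-1 operations (Select, Where, GroupBy...) stay dispose-safe.
    public static IEnumerable<T> AsSafeEnumerable<T>(this IEnumerable<T> source)
        where T : IDisposable
    {
        foreach (var item in source)
        {
            try { yield return item; }
            finally { item?.Dispose(); }
        }
    }
}

// Toy stand-in for a disposable element like SPWeb.
class Resource : IDisposable
{
    public string Name;
    public bool Disposed;
    public void Dispose() { Disposed = true; }
}

static class Program
{
    static void Main()
    {
        var items = new[] { new Resource { Name = "a" }, new Resource { Name = "b" } };

        // A category-1 operation over the safe enumerator:
        var names = items.AsSafeEnumerable().Select(r => r.Name).ToList();

        Console.WriteLine(string.Join(",", names));    // a,b
        Console.WriteLine(items.All(r => r.Disposed)); // True
    }
}
```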

Wishful Thinking

The unfortunate reality is that most of these adjustments could never happen because breaking changes are generally frowned upon. But perhaps in framing our disposal thought process around some simple rules with documented exceptions, it may be easier to get things right without leaning on tools like SPDisposeCheck.


Introducing SPWeb.GetParentWeb()

Official Microsoft guidance is to never explicitly Dispose() SPWeb.ParentWeb. I generally agree with this advice, given that my Rule #1 of SharePoint disposal is that “Using a disposed object can cause more problems than failing to dispose.” To understand why, I’ll borrow my explanation from SPDevWiki:

This property will allocate an SPWeb object the first time it is called. The caveat is that once it is disposed, any reference to the property will return the disposed object. If an SPWeb is not owned by the developer, its ParentWeb should be considered not owned as well. For example, there could be a problem if two components both depend on SPContext.Current.Web.ParentWeb and one calls Dispose() before the other is done with it.

However, this can result in memory pressure in cases involving enumeration or where the parent SPSite has a long lifetime. For example:

SPSite contextSite = SPContext.Current.Site;
foreach(SPWeb web in contextSite.AllWebs.AsSafeEnumerable())
{
    SPWeb webParent = web.ParentWeb; // Internal OpenWeb()
    // Do something with web and webParent
}

The web references are disposed by my safe iterator, but every webParent will remain open until the context SPSite is disposed. Not that I would recommend using code like this (in fact I would strongly urge against it), but you can never say never.

To that end, I propose a simple extension method whose contract is clear: always dispose me! We can still follow MS guidance regarding SPWeb.ParentWeb, but have convenient access to a developer-owned parent SPWeb as well:

[SPDisposeCheckIgnore(SPDisposeCheckID.SPDisposeCheckID_120, "By Design")]
public static SPWeb GetParentWeb(this SPWeb web)
{
    if(web == null)
        throw new ArgumentNullException("web");
    return web.Site.OpenWeb(web.ParentWebId);
}

And our “Best Practice” memory pressure can be revised slightly to achieve much better memory use:

SPSite contextSite = SPContext.Current.Site;
foreach(SPWeb web in contextSite.AllWebs.AsSafeEnumerable())
{
    using(SPWeb webParent = web.GetParentWeb())
    {
        // Do something with web and webParent
    }
}

Trivial? Obvious? Perhaps. But often the most useful code is.

Update: Included appropriate SPDisposeCheckIgnore attribute for “leaked” SPWeb from OpenWeb(); we know what we’re doing. That said, you could certainly implement higher-order functions to invoke an action or selector against our imitation ParentWeb without returning it—I’ll leave those as an exercise for the reader.

More SharePoint Higher-Order Functions

Though I haven’t actually used the term before, I’ve discussed a number of higher-order functions in the past. Simply put, a higher-order function either accepts a function as a parameter, returns a function, or both. The terminology might be foreign, but the technique is used all over the place.

Another use of higher-order functions is to ensure the existence of a SharePoint resource. For example, I often need to fetch a SharePoint list and create it if doesn’t exist. A standard implementation might look something like this:

public static SPList GetOrCreateList(this SPWeb web, string listName,
                                     string description, SPListTemplate template)
{
    SPListCollection webLists = web.Lists;
    SPList list = webLists.Cast<SPList>()
                          .FirstOrDefault(l => l.Title == listName);
    if (list == null)
    {
        Guid newListID = webLists.Add(listName, description, template);
        list = webLists[newListID];
    }
    return list;
}

While there’s nothing wrong with this implementation, per se, it’s not exceedingly flexible. What if we want to use a different overload of SPListCollection.Add? What if we need to elevate privileges to create the list? We could certainly create a dozen variations based on this pattern, but that’s a bunch of duplicate code that we would much rather avoid. Instead, we can use a single higher-order function:

public static SPList GetOrCreateList(this SPWeb web, string listName,
                                     Func<SPWeb, string, SPList> listBuilder)
{
    SPList list = web.Lists.Cast<SPList>()
                     .FirstOrDefault(l => l.Title == listName);
    if(list == null && listBuilder != null)
        list = listBuilder(web, listName);
    return list;
}

And then specify exactly how the list should be created. We could redefine our original method like this:

public static SPList GetOrCreateList(this SPWeb web, string listName,
                                     string description, SPListTemplate template)
{
    return GetOrCreateList(web, listName, (builderWeb, builderName) =>
    {
        var builderLists = builderWeb.Lists;
        Guid newListID = builderLists.Add(builderName, description, template);
        return builderLists[newListID];
    });
}

Or we can just as easily specify a builder that uses elevated privileges and a different Add overload:

public static SPList GetOrCreateTasksList(this SPWeb web)
{
    return GetOrCreateList(web, "Tasks", (builderWeb, builderName) =>
    {
        Guid newListId = web.SelectAsSystem(sysWeb =>
            sysWeb.Lists.Add(builderName, null, SPListTemplateType.Tasks));

        return builderWeb.Lists[newListId];
    });
}

Or my preference is to define a (testable) builder method and just use the higher-order function without a wrapper:

private static SPList CreateGenericList(SPWeb web, string name)
{
    var id = web.Lists.Add(name, null, SPListTemplateType.GenericList);
    return web.Lists[id];
}

void DoSomething(SPWeb web)
{
    SPList list = web.GetOrCreateList("Some List", CreateGenericList);
    if (list == null)
        throw new SPException("Some List does not exist and could not be created.");
    // Do something
}

GetOrCreateGroup

Another use for this pattern is the creation of SharePoint groups, inspired by Adam Buenz’s recent post. His code is correct (though I believe an ordinal comparison is more appropriate than invariant culture), but it can’t easily handle scenarios requiring elevation, AllowUnsafeUpdates, etc. Instead, we can define a higher-order function like this:

public static SPGroup GetOrCreateGroup(this SPWeb web, string groupName,
                                       Func<SPWeb, string, SPGroup> groupBuilder,
                                       Action<SPWeb, SPGroup> associateGroup)
{
    SPGroup group = web.SiteGroups.Cast<SPGroup>()
                        .FirstOrDefault(g =>
                            string.Equals(g.Name, groupName,
                                StringComparison.OrdinalIgnoreCase));
    if (group == null && groupBuilder != null)
        group = groupBuilder(web, groupName);
    if (group != null && associateGroup != null)
        associateGroup(web, group);
    return group;
}

With which the original method is easily rewritten:

public static SPGroup GetGroupOrCreate(SPWeb web, string name,
                                       string description, SPUser owner,
                                       SPUser defaultUser, bool associate)
{
    return web.GetOrCreateGroup(name,
        (builderWeb, builderName) =>
        {
            var builderGroups = builderWeb.SiteGroups;
            builderGroups.Add(builderName, owner, defaultUser, description);
            return builderGroups[builderName];
        },
        (assocWeb, assocGroup) =>
        {
            if (associate && !assocWeb.AssociatedGroups.Contains(assocGroup))
            {
                web.AssociatedGroups.Add(assocGroup);
                web.Update();
            }
        });
}

Again, the advantage is that we can easily tweak how the group is created and associated independent from the common get-or-create logic.