Visual Studio 2012 and webpages:Version

If you open an older ASP.Net MVC 3 project in Visual Studio 2012, you may see lots of errors in the Razor views, along the lines of “The name 'model' does not exist in the current context”, and similar errors whenever you try to use MVC features like HTML helpers or ViewContext (e.g., “System.Web.WebPages.Html.HtmlHelper does not contain a definition for TextBoxFor”).

This happens if there is no <add key="webpages:Version" value="1.0" /> in the <appSettings> element in Web.config.
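
For reference, the complete setting sits inside the <appSettings> element of Web.config, like this:

<appSettings>
  <add key="webpages:Version" value="1.0" />
</appSettings>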

Without this element, Visual Studio will assume that you’re using the latest version of Razor and the WebPages framework.  Until VS2012, this wasn’t a problem, since there was only one version.  However, VS2012 ships with ASP.Net WebPages 2.0, so the IDE will load that version by default.  Because you’ve registered the MVC integration and <configSections> for version 1.0 (in Views/Web.config, where all of the assembly references specify Version=1.0.0.0), the language services won’t load the MVC settings.  Therefore, the @model directive will not work, and the view will inherit the standard WebPage base class rather than the MVC WebViewPage base class (which contains the MVC HTML helpers).

This issue has no effect at runtime because the server won’t load any version 2.0 assemblies.

Exploring Caller Info Attributes

Last year, Microsoft announced a simple new feature in C# 5: Caller Info Attributes.  These attributes let you create methods with optional parameters and tell the compiler to pass the caller’s file path, line number, or member name instead of the parameter’s default value.  This allows you to create logging methods that automatically know where they’re being called from.
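
For example, here is a minimal sketch of such a logging method (the Log.Write name and output format are mine, not part of the feature):

using System;
using System.Runtime.CompilerServices;

static class Log {
    //The compiler fills in the optional parameters with the
    //call site's member name, file path, and line number.
    public static void Write(string message,
                             [CallerMemberName] string member = "",
                             [CallerFilePath] string file = "",
                             [CallerLineNumber] int line = 0) {
        Console.WriteLine("{0} ({1}:{2}): {3}", member, file, line, message);
    }
}

Calling Log.Write("Saving changes") from inside a method named SaveChanges will automatically record "SaveChanges", plus the file and line number of the call.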

When the feature was announced, I wrote a couple of blog posts that delved into some of the corner cases of the new feature.  At the time, there was no public implementation, so they were pure conjecture.

This morning, Microsoft released the beta of Visual Studio 11, which is the first public build supporting these attributes.  Now, I can finally test my theories.  Here are the results:

Although these attribute classes are new to the .Net Framework 4.5, you can still use this feature against older framework versions by creating your own classes in the System.Runtime.CompilerServices namespace.  However, the feature will only work if the code calling the method is compiled with the C# 5 compiler; older compilers will ignore the attributes and simply pass the parameters’ default values.
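
Here is a sketch of what those declarations would look like (mirroring the framework’s own, which apply to parameters):

namespace System.Runtime.CompilerServices {
    //The compiler matches these attributes by their full names,
    //so the namespace must be exactly this one.
    [AttributeUsage(AttributeTargets.Parameter, Inherited = false)]
    public sealed class CallerMemberNameAttribute : Attribute { }

    [AttributeUsage(AttributeTargets.Parameter, Inherited = false)]
    public sealed class CallerFilePathAttribute : Attribute { }

    [AttributeUsage(AttributeTargets.Parameter, Inherited = false)]
    public sealed class CallerLineNumberAttribute : Attribute { }
}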

All of the attributes can only be applied to parameters whose types have standard (not custom) implicit conversions from int or string, as appropriate.  Since the attribute can’t be applied to a parameter of a smaller type like short or byte, the only way to overflow [CallerLineNumber] would be a source file with over two billion lines, and the compiler ran out of memory before I could build one, so I can’t test how that behaves.

Using [CallerMemberName] on field initializers passes the field name; in static or instance constructors, it passes the string ".cctor" or ".ctor", respectively (as documented).  In indexers, it passes "Item".
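
A quick sketch of those cases (Caller.Name is a hypothetical helper that just returns the injected member name):

using System;
using System.Runtime.CompilerServices;

static class Caller {
    public static string Name([CallerMemberName] string member = "") {
        return member;
    }
}

class Sample {
    //Field initializer: the compiler passes "someField"
    private string someField = Caller.Name();

    //Constructor body: the compiler passes ".ctor"
    //(a static constructor would pass ".cctor")
    public Sample() { Console.WriteLine(Caller.Name()); }

    //Indexer: the compiler passes "Item"
    public string this[int index] {
        get { return Caller.Name(); }
    }
}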

If a class has a constructor whose parameters all have caller info attributes, and you create another class that inherits from it without declaring a constructor (so the compiler-generated constructor implicitly passes the optional parameters), the compiler passes the line number and file name of the class keyword in the derived class, but leaves the declared default for the member name (I suspect that’s a bug).

If you do declare a constructor, it passes the string ".ctor" as the member name for the implicit base() call (just like a normal method call from inside a constructor) and the line number of the beginning of the constructor declaration.  If you actually write a base() call, it passes the line number of the base keyword.
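
Here is a sketch of the two scenarios (class names are mine):

using System;
using System.Runtime.CompilerServices;

class Base {
    public Base([CallerMemberName] string member = "?",
                [CallerFilePath] string file = "?",
                [CallerLineNumber] int line = 0) {
        Console.WriteLine("{0} / {1} / {2}", member, file, line);
    }
}

//No constructor declared: the compiler-generated constructor passes
//the file and line of the class keyword below, but member stays "?"
class ImplicitDerived : Base { }

//Explicit constructor: the implicit base() call passes ".ctor" and
//the line number of the constructor declaration
class ExplicitDerived : Base {
    public ExplicitDerived() { }
}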

If a call spans multiple lines, [CallerLineNumber] passes the line containing the opening parenthesis.

Delegates are fully supported; if you call a delegate that has a parameter annotated with a caller info attribute, the compiler will insert the correct value, regardless of the method you’re actually calling (which the compiler doesn’t even know).
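
For example (the Report delegate and Target method are hypothetical names):

using System;
using System.Runtime.CompilerServices;

//The attribute goes on the delegate's parameter; the compiler inserts
//the caller's member name at every call made through the delegate.
delegate void Report(string message, [CallerMemberName] string member = "");

class DelegateDemo {
    static void Target(string message, string member) {
        Console.WriteLine("{0}: {1}", member, message);
    }

    static void Main() {
        Report report = Target;
        report("hello");    //Prints "Main: hello", even though Target
                            //itself has no caller info attributes
    }
}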

LINQ query comprehension syntax is not supported at all; if you create (for example) a Select() method that has a parameter with a caller info attribute, then call it from a LINQ query (not lambda syntax), the compiler will crash (!).  (They will fix that.)

Expression trees do not support optional parameters at all, so that corner case is irrelevant.

Attributes are the most interesting story.  What should happen if you declare a custom attribute that takes parameters with caller info attributes, then apply that attribute in various cases?  This could potentially be very useful, since there is currently no way for an attribute to know what it’s being applied to. (I hadn’t thought of this usage when I wrote the original blog post)
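
For example, one could imagine a hypothetical attribute like this (LocatedAttribute is my own name, not part of the framework):

using System;
using System.Runtime.CompilerServices;

//Hypothetical attribute that records where it was applied
[AttributeUsage(AttributeTargets.All)]
sealed class LocatedAttribute : Attribute {
    public string Member { get; private set; }
    public string File { get; private set; }
    public int Line { get; private set; }

    public LocatedAttribute([CallerMemberName] string member = "",
                            [CallerFilePath] string file = "",
                            [CallerLineNumber] int line = 0) {
        Member = member;
        File = file;
        Line = line;
    }
}

class Annotated {
    [Located]    //Should receive "SomeMethod" and this file & line
    public void SomeMethod() { }
}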

The documentation says that this will work in all cases, and that [CallerMemberName] will pass whatever the attribute is being applied to.  However, in the beta build, this doesn’t always work.

Attributes applied to method parameters or return values do not pass any caller info at all.  Attributes applied to types or generic type arguments do not pass member names (this is very disappointing).

Hopefully, those will be fixed before release.

ASP.Net MVC Unobtrusive Validation Bug

If you use the ASP.Net MVC 3 [Compare] validation attribute on a model property, then include that model as a property in a parent model (so that the field name becomes Parent.ChildProperty), the built-in unobtrusive client validation will choke, and will always report the field as having an error.

This is due to a bug on line 288 of jquery.validate.unobtrusive.js:

adapters.add("equalto", ["other"], function (options) {
    var prefix = getModelPrefix(options.element.name),
        other = options.params.other,
        fullOtherName = appendModelPrefix(other, prefix),
        element = $(options.form).find(":input[name=" + fullOtherName + "]")[0];

    setValidationValues(options, "equalTo", element);
});

Because the value of the name attribute selector is not quoted, this fails if the name contains a period (as it does for nested properties like Parent.ChildProperty).

The simplest fix is to add quotes around the concatenated value.  However, the jQuery selector there is overkill.  HTML form elements have properties for each named input, so you can do this instead:

adapters.add("equalto", ["other"], function (options) {
    var prefix = getModelPrefix(options.element.name),
        other = options.params.other,
        fullOtherName = appendModelPrefix(other, prefix),
        element = options.form[fullOtherName];
    if (!element)
        throw new Error(fullOtherName + " not found");
    //If there are multiple inputs with that name, get the first one
    if (element.length && element[0])        
        element = element[0];
    setValidationValues(options, "equalTo", element);
});

Protecting against CSRF attacks in ASP.Net MVC

CSRF attacks are one of the many security issues that web developers must defend against.  Fortunately, ASP.Net MVC makes it easy to defend against CSRF attacks.  Simply slap [ValidateAntiForgeryToken] on every POST action and include @Html.AntiForgeryToken() in every form, and your forms will be secure against CSRF.
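
Here is a minimal sketch of what that looks like (the controller and action names are placeholders):

using System.Web.Mvc;

public class ProfileController : Controller {
    //The corresponding view must render @Html.AntiForgeryToken()
    //inside its form, or every POST to this action will be rejected.
    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Edit(FormCollection form) {
        //...apply the changes...
        return RedirectToAction("Index");
    }
}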

However, it is easy to forget to apply [ValidateAntiForgeryToken] to every action.  To prevent such mistakes, you can create a unit test that loops through all of your controller actions and makes sure that every [HttpPost] action also has [ValidateAntiForgeryToken]. 

Since there may be some POST actions that should not be protected against CSRF, you’ll probably also want a marker attribute to tell the test to ignore some actions.

This can be implemented as follows:

First, define the marker attribute in the MVC web project.  This attribute can be applied to a single action, or to a controller to allow every action in the controller.

///<summary>Indicates that an action or controller deliberately 
/// allows CSRF attacks.</summary>
///<remarks>All [HttpPost] actions must have 
/// [ValidateAntiForgeryToken]; any deliberately unprotected 
/// actions must be marked with this attribute.
/// This rule is enforced by a unit test.</remarks>
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public sealed class AllowCsrfAttacksAttribute : Attribute { }

Then, add the following unit test:

[TestMethod]
public void CheckForCsrfProtection() {
    var controllers = typeof(MvcApplication).Assembly.GetTypes().Where(typeof(IController).IsAssignableFrom);
    foreach (var type in controllers.Where(t => !t.IsDefined(typeof(AllowCsrfAttacksAttribute), true))) {
        //Find the [HttpPost] actions, skipping methods that cannot be ordinary actions
        var postActions = type.GetMethods()
                                .Where(m => !m.ContainsGenericParameters)
                                .Where(m => !m.IsDefined(typeof(ChildActionOnlyAttribute), true))
                                .Where(m => !m.IsDefined(typeof(NonActionAttribute), true))
                                .Where(m => !m.GetParameters().Any(p => p.IsOut || p.ParameterType.IsByRef))
                                .Where(m => m.IsDefined(typeof(HttpPostAttribute), true));

        foreach (var action in postActions) {
            //CSRF XOR AntiForgery
            Assert.IsTrue(action.IsDefined(typeof(AllowCsrfAttacksAttribute), true) != action.IsDefined(typeof(ValidateAntiForgeryTokenAttribute), true),
                            action.Name + " is [HttpPost] but not [ValidateAntiForgeryToken]");
        }
    }
}

typeof(MvcApplication) can be replaced by any type in the assembly that contains your controllers.  If your controllers are defined in multiple assemblies, you’ll need to include those assemblies too.

The Dark Side of Covariance

What’s wrong with the following code?

var names = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
...
if (names.Contains(sqlCommand.ExecuteScalar()))

This code is intended to check whether the result of a SQL query is contained in a case-insensitive collection of names.  However, if you run this code, the resulting check will be case-sensitive.  Why?

As you may have guessed from the title, this is caused by covariance.  In fact, this code will not compile at all against .Net 3.5. 

The problem is that ExecuteScalar() returns object, not string.  Therefore, the code doesn’t call HashSet<string>.Contains(string), which is what it intends to call (and which uses the HashSet’s comparer).  Instead, on .Net 4.0, it calls the Enumerable.Contains<object>(IEnumerable<object>, object) extension method, using the covariant conversion from IEnumerable<string> to IEnumerable<object>.  Covariance allows us to pass any object to the Contains() method of any strongly-typed collection (of reference types).

Still, why is it case-sensitive?  As Jon Skeet points out, the LINQ Contains() method is supposed to call any built-in Contains() method from ICollection<T>, so it should still use the HashSet’s case-insensitive Contains().

The reason is that although HashSet<String> implements ICollection<string>, it does not implement ICollection<object>.  Since we’re calling Enumerable.Contains<object>, it checks whether the sequence implements ICollection<object>, which it doesn’t.  (ICollection<T> is not covariant, since it allows write access)
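
Here is a small self-contained demonstration of the difference (a local object variable stands in for ExecuteScalar()’s return value):

using System;
using System.Collections.Generic;
using System.Linq;

class CovarianceDemo {
    static void Main() {
        var names = new HashSet<string>(StringComparer.OrdinalIgnoreCase) { "Alice" };
        object value = "ALICE";    //Stands in for sqlCommand.ExecuteScalar()

        //Resolves to Enumerable.Contains<object>() via covariance;
        //the HashSet's comparer is ignored, so this prints False
        Console.WriteLine(names.Contains(value));

        //Resolves to HashSet<string>.Contains(string); prints True
        Console.WriteLine(names.Contains((string)value));
    }
}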

Fortunately, there’s a simple fix: just cast the return value back to string (and add a comment explaining the problem).  This allows the compiler to call HashSet<string>.Contains(string), as was originally intended.

//Call HashSet<string>.Contains(string), not the
//covariant Enumerable.Contains(IEnumerable<object>, object)
//http://blog.slaks.net/2011/12/dark-side-of-covariance.html
if (names.Contains((string)sqlCommand.ExecuteScalar()))

(I discovered this issue in my StringListConstraint for ASP.Net MVC)

CAPTCHAs do not mitigate XSS worms

One common misconception about web security is that protecting important actions with CAPTCHAs can prevent XSS attacks from doing real damage.  By preventing malicious code from scripting critical tasks, the idea goes, XSS injections won’t be able to accomplish much.

This idea is dangerously wrong. 

First of all, this should not even be considered except as a defense-in-depth mechanism.  Regardless of whether the actions you care about are protected by CAPTCHAs, XSS attacks can create arbitrary UI on your pages, and can thus make “perfect” phishing attacks.

Also, even with CAPTCHAs, an XSS injection can wait until the user performs the critical action, then change the submitted data to suit the attacker’s whim.

For example, if Twitter took this approach to prevent XSS injections from sending spammy tweets, the attacker could simply wait until the user sends a real tweet, then silently append advertising to the tweet as the user submits it and fills out the CAPTCHA.

However, there is also a more fundamental issue.  Since the injected Javascript is running in the user’s browser, it can simply display the CAPTCHA to the user and block all page functionality until the user solves the CAPTCHA.  The attacker can even put his own text around the CAPTCHA to make it look like a legitimate security precaution, so that the (typical) user will not realize that the site has been compromised.  (That could be prevented by integrating a description of the action being performed into the CAPTCHA itself, in a way that the attacker can’t hide.)

I haven’t even mentioned the inconvenience of forcing all legitimate, uncompromised users to fill out CAPTCHAs every time they do anything significant.

In summary, CAPTCHAs should only be used to prevent programs from automatically performing actions (e.g., bulk-registering Google accounts), and as a rate-limiter if a user sends too many requests too quickly (e.g., getting a password wrong too many times in a row).

XSS can only be stopped by properly encoding all user-generated content that gets concatenated into markup (whether HTML, Javascript, or CSS).

About Concurrent Collections

One of the most useful additions to the .Net 4.0 base class library is the System.Collections.Concurrent namespace, which contains an all-new set of lock-free, thread-safe collections.

However, these collections are noticeably different from their classical counterparts.  There is no simple ConcurrentList<T> that you can drop into your code so that it will become thread-safe.  Instead, the new namespace has a queue, a stack, and some new thing called a bag, as well as a ConcurrentDictionary<TKey, TValue> that largely resembles classical dictionaries.  It also has a BlockingCollection<T> class that wraps a concurrent collection and blocks until operations can succeed.

Many people have complained that Microsoft chose to provide an entirely new set of classes, rather than adding synchronized versions of the existing collections. 

In truth, however, creating this new set of classes is the correct – and, in fact, only – choice.  Ordinary synchronized collections are rarely useful and will not make a program thread-safe.  In general, slapping locks everywhere does not make a program thread-safe!  Rather, that will either not help (if there aren’t enough locks) or, if there are enough locks, result in deadlocks or a program that never runs more than one thread at a time (depending on how many different objects get locked).

Collections in particular have two fundamental issues when used on multiple threads.

The first, and simpler, issue is that the collection classes themselves are not thread-safe.  If two threads add to a List<T> at the same exact time, one thread is likely to overwrite the other thread’s value.  A synchronized version of List<T> with a lock around every method would solve this problem.

The bigger issue is that any code that uses a List<T> is unlikely to be thread-safe, even if the list itself is thread-safe.  For example, you can never enumerate over a multi-threaded list, because another thread may change the list at any time, invalidating the enumerator.  This issue could be solved by taking a read lock for the lifetime of the enumerator.  However, that is also a bad idea, since if any client code forgets to dispose the enumerator, the collection will deadlock when written to.

You also cannot use indices.  It is never safe to get, set, or remove an item at an index, because another thread might remove that item or clear the entire collection between your index check and the operation.
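
For example, even a fully synchronized list couldn’t make this pattern safe (a sketch; the list and method names are arbitrary):

using System;
using System.Collections.Generic;

class RaceSketch {
    static readonly List<string> items = new List<string> { "only item" };

    //Even if every List<T> method were individually synchronized, this
    //check-then-act sequence would still race: another thread can clear
    //the list between the Count check and the indexer access.
    static void PrintFirst() {
        if (items.Count > 0)
            Console.WriteLine(items[0]);    //may throw ArgumentOutOfRangeException
    }
}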

To solve all of these problems, you need thread-safe collections that provide atomic operations.  This is why all of the concurrent collections have such strange methods, including TryPop, AddOrUpdate, and TryTake.  These methods perform their operations atomically, and return false if the collection was empty (as appropriate; consult the documentation for the actual details).  Thus, the new concurrent collections can be used reliably in actual multi-threaded code without a separate layer of locks.
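
Here is a minimal sketch of those atomic operations in use (the collection contents are arbitrary):

using System;
using System.Collections.Concurrent;

class ConcurrentSketch {
    static void Main() {
        var queue = new ConcurrentQueue<string>();
        queue.Enqueue("first");

        //TryDequeue atomically checks for an item and removes it, so there
        //is no race between checking for emptiness and dequeuing.
        string item;
        if (queue.TryDequeue(out item))
            Console.WriteLine(item);

        var counts = new ConcurrentDictionary<string, int>();
        //AddOrUpdate atomically inserts 1 the first time and runs the
        //update delegate against the current value on later calls.
        counts.AddOrUpdate("visits", 1, (key, existing) => existing + 1);
    }
}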