
Would you like to boost a class of yours that hosts a collection by using a cool C# language feature: collection initializer support?

For instance, var myList = new List<int> { 1, 3, 4 }, where { 1, 3, 4 } is the initializer.

The basic idea is that the class implements IEnumerable<T> and has an ‘Add’ method with the correct signature; in this sample the class also implements IEnumerator<T>, so it can act as its own enumerator.

I made the class generic so you can initialize any collection of reference types (ok, not structs and simple types because of the class constraint, but if you modify a few things that works too).

// our sample class for iteration feeding

public class Address
   {
       public string Street { get; set; }
       public string StreetNumber { get; set; }
       public string ZipCode { get; set; }
       public string City { get; set; }
   }
   public static class Program
   {
       static void Main()
       {
           var t = new ClassWithInitializerSupport<Address> {
               new Address()
               {
                   Street = "Mainstr.",
                   StreetNumber = "1",
                   City ="London"
               },
              new Address()
               {
                   Street = "De Dam",
                   StreetNumber = "1",
                   City ="Amsterdam"
               },
               new Address()
               {
                   Street = "Mangostreet",
                   StreetNumber = "123",
                   City ="New York"
               }
           };
           foreach(var a in t)
           {
               Console.WriteLine("Street {0}, City {1}", a.Street, a.City);
           }

        }
    }

public class ClassWithInitializerSupport<T> : IEnumerable<T>, IEnumerator<T> where T : class
   {
       public ClassWithInitializerSupport()
       {
           arr = (T[])Array.CreateInstance(typeof(T), 0);
           pos = -1;
       }
       private T[] arr;
       private int pos;
       public T Current => arr[pos];

       object IEnumerator.Current => arr[pos];

        public void Dispose()
        {
            // nothing to dispose
        }

        // Note: returning 'this' means the collection is its own (single) enumerator;
        // call Reset() before enumerating a second time.
        public IEnumerator<T> GetEnumerator()
        {
            return this;
        }
        // 'Add' is the method the compiler binds each collection-initializer entry to
        public void Add(T v)
        {
            var p = arr.Length;
            Array.Resize(ref arr, p + 1);
            arr[p] = v;
        }
       public bool MoveNext()
       {
           if (pos + 1 < arr.Length)
           {
               pos++;
               return true;
           }
           return false;
       }

       public void Reset()
       {
           pos = -1;
       }

       IEnumerator IEnumerable.GetEnumerator()
       {
           return this;
       }
   }
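By the way, this works because the compiler lowers a collection initializer into a parameterless construction followed by one Add(...) call per element. Conceptually, the initializer in Main above becomes:

// what the compiler generates for the initializer, roughly
var t = new ClassWithInitializerSupport<Address>();
t.Add(new Address { Street = "Mainstr.", StreetNumber = "1", City = "London" });
t.Add(new Address { Street = "De Dam", StreetNumber = "1", City = "Amsterdam" });
t.Add(new Address { Street = "Mangostreet", StreetNumber = "123", City = "New York" });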

The Microsoft OData library roadmap? Some insights, please?

Microsoft has taken the road of open source and the community. Right, but I guess this is the reason that some libraries, in the past maintained by strictly managed teams, are now more or less ‘loosely’ managed.

About OData bugs and questions: I have seen comments in the community suggesting that Microsoft has abandoned maintenance on OData. However, I doubt this is the case, since Azure heavily depends on OData.

For better versioning and to remove dependencies on Windows Update for .NET components, the .NET library has been split up into smaller parts. The future path is ‘.NET Core’, while the current path, still popular as well, is the full .NET Framework. There is a mysterious overlap in versions: using Visual Studio 2015 and C#, you could target an assembly at both .NET 4.6.2 and .NET Core / .NET Standard 1.6. My focus in this posting, however, is not the .NET versions but the OData versions.

It looks as if OData 5.8.x is still the most widely used OData service library, while Microsoft has also created a System.Web.OData 6.0.0 version, which is compatible with the ‘next’ platform of .NET (say, .NET Core; I am not quite sure what exactly is going on here).

To make it worse, if you develop against Azure services, some clients, such as the Azure Search Index client, use OData 5.8, while DocumentDB already prefers 6.0.0. These two OData branches bite each other.

Anyway, OData 6.0.0 has improved modelling functionality, but it has a lot of BUGS in the controller handling section that still have not been solved since its release.

So, if you have these libraries (the lines below are from the NuGet packages.config file), read on!

<package id="Microsoft.AspNet.OData" version="6.0.0" targetFramework="net462" />

<package id="Microsoft.OData.Core" version="7.1.1" targetFramework="net462" />
<package id="Microsoft.OData.Edm" version="7.1.1" targetFramework="net462" />

For complete details about the bug, I have created an issue on GitHub, where the OData WebApi repository resides.

Now the question: how to work around the BUG?

Good news: you can work around it, but the workaround requires you to break some consistency with your OData API behavior. I tried to minimize the damage as much as possible, AND if Microsoft fixes these bugs, you just rename a few things and your OData API controller should work again!

Second, I could just post the binaries here and say: “Good luck with it!” But this post also explains a few other things:

  1. How to intercept an OData server method, get the raw body yourself, and process it.
  2. How to deal with ‘custom’ serialisation on OData, i.e. using Newtonsoft.Json. (Which is possible, but I do not recommend it for 6.0.0.)
  3. How to impress your team members with this neat OData PATCH-patch. :)

 

My OData controller layout

Say, this is my controller. You will recognize it; it’s probably not very different from yours.

[EnableQuery]
[Authorize]
[ODataRoutePrefix("companies")]
public class CompaniesController : BaseODataController
{
    private static readonly ODataValidationSettings settings = new ODataValidationSettings()
    {
        // Initialize settings as needed.
        AllowedFunctions = AllowedFunctions.IndexOf | AllowedFunctions.ToLower | AllowedFunctions.All | AllowedFunctions.Any | AllowedFunctions.AllStringFunctions // includes Contains
    };

    private readonly IManager<company> _companyManager;

    public CompaniesController(IManager<company> companyManager) // deal with IoC, just a sample
    {
        _companyManager = companyManager;
    }

    [EnableQuery]
    public async Task<IHttpActionResult> Get(ODataQueryOptions<company> options)
    {
        try
        {
            options.Validate(settings);
        }
        catch (Exception ex)
        {
            return BadRequest(ex.Message);
        }
        try
        {
            var data = await _companyManager.SearchAsync(options);

            return Ok(data);
        }
        catch (Exception ex)
        {
            return BadRequest(ex.Message);
        }
    }

 

public async Task<IHttpActionResult> Delete(Guid key)
     {
         if (!ModelState.IsValid)
         {
             return BadRequest(ModelState);
         }
         var result = await _companyManager.DeleteAsync(key);
         if (!result)
         {
             return BadRequest(_companyManager.GetValidator());
         }
         return this.StatusCode(HttpStatusCode.NoContent);
     }

public async Task<IHttpActionResult> Post(company c)
{
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }
    c.id = Guid.NewGuid();

    var success = await _companyManager.InsertAsync(c);

    if (!success)
    {
        return BadRequest(_companyManager.GetValidator());
    }
    return Created(c);
}


// NOW we come to the BUG in OData 6.0.0. The OData PATCH method would be USEless, considering the fact that complex entity types canNOT be patched as of the moment of writing.

public async Task<IHttpActionResult> Patch(Guid key, [DeltaBody] Data.OData.Delta<company> delta)
     {

         if (delta == null)
         {
             return BadRequest("delta cannot be null");
         }
         var instance = delta.GetEntity();

         Validate(instance);

         if (!ModelState.IsValid)
         {
             return BadRequest(ModelState);
         }
        
         var curCompany = await _companyManager.GetById(key);
         if (curCompany == null)
         {
             return NotFound();
         }
             delta.Patch(curCompany);

           try
         {

             var result = await _companyManager.UpdateAsync(curCompany);
             if (!result)
             {
                 return BadRequest(_companyManager.GetValidator());
             }
             if (WantContent)
             {
                 return Content(HttpStatusCode.Created, curCompany);
             }
             return Updated(curCompany);
         }
          catch (Exception ex)
          {
              //yadda
              throw;
          }
      }
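To visualize the workaround in action: assuming your OData route is mounted under /odata, a PATCH request now looks like the example below. The ‘name’ field is an assumed property of company; the body contains only the properties you want to change.

PATCH /odata/companies(3fa85f64-5717-4562-b3fc-2c963f66afa6) HTTP/1.1
Content-Type: application/json

{ "name": "New company name" }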

DeltaBodyAttribute, my little gem!

Rick Strahl’s post on his custom body filter helped me to customize my PATCH method. Normally, the delta would be typed System.Web.OData.Delta<company>. But now we have to tweak this code into a Delta override, which is duplicated code from OData 5.8, where the Delta Patch works.

As you can see, your PATCH method only needs this adaptation, and later on you can change it back to the intended OData behavior once Microsoft fixes library version 6.0.0 or higher.

Keep in mind!

  1. Since we use Newtonsoft serialisation, the ODataConventionModelBuilder is ignored. All configuration, such as how to deal with nullables, capitalisation and complex property serialisation, must be emulated with Newtonsoft. OData has its own serialisation fabric, and especially for 6.0.0 the samples simply don’t work and documentation is lacking. (Enough with the rants, for now.)
  2. Your model, which normally applies to OData, must be mirrored so that Newtonsoft works.
  3. I don’t use camelCase/Ruby-on-Rails naming conventions and JsonProperty tricks for our frontend HTML consumers to get my C# naming conventions ‘right’. So my classes (POCOs) really look like this: public string first_name {get;set;}
    This is to avoid further complex compatibility code between OData entity types and our ‘patch’ that uses Newtonsoft.
  4. If you use dynamic_properties for your entity model, which OData supports (they are called Open Type definitions), this might bite.
  5. Exceptions during serialisation will be Newtonsoft exceptions, not OData exceptions.

If you keep these changes in mind, maybe you can live with this workaround.

DeltaBodyAttribute

using System;
using System.Web.Http;
using System.Web.Http.Controllers;


// see https://github.com/RickStrahl/AspNetWebApiArticle/tree/master/AspNetWebApi

namespace MyMuke.Data.OData
{
    /// <summary>
    /// An attribute that captures the entire content body and stores it
    /// into the parameter of type <see cref="Data.OData.Delta{TEntityType}"/>
    /// </summary>
    /// <remarks>
    /// The parameter marked up with this attribute should be the only parameter as it reads the
    /// entire request body and assigns it to that parameter.   
    /// </remarks>
    [AttributeUsage(AttributeTargets.Class | AttributeTargets.Parameter, AllowMultiple = false, Inherited = true)]
    public sealed class DeltaBodyAttribute : ParameterBindingAttribute
    {
        /// <summary>
        /// Creates the binding that reads the request body into the Delta parameter.
        /// </summary>
        /// <param name="parameter">the parameter to bind</param>
        /// <returns>a <see cref="DeltaBodyParameterBinding"/></returns>
        public override HttpParameterBinding GetBinding(HttpParameterDescriptor parameter)
        {
            if (parameter == null)
                throw new ArgumentNullException(nameof(parameter));

            return new DeltaBodyParameterBinding(parameter);
        }
    }
}

DeltaBodyParameterBinding

using System;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http.Controllers;
using System.Web.Http.Metadata;
using System.Linq;

 

namespace MyMuke.Data.OData
{
    /// <summary>
    /// Reads the Request body into a Delta&lt;&gt; <see cref="Delta{TEntityType}"/>
    /// assigns it to the parameter bound.
    ///
    /// Should only be used with a single parameter on
    /// a Web API method, using the [DeltaBody] attribute
    /// </summary>
    public class DeltaBodyParameterBinding : HttpParameterBinding
    {
        /// <summary>
        /// ctor
        /// </summary>
        /// <param name="descriptor"></param>
        public DeltaBodyParameterBinding(HttpParameterDescriptor descriptor)
            : base(descriptor)
        {

        }

        /// <summary>
        /// Reads the PATCH request body and binds it to the Delta parameter
        /// </summary>
        /// <param name="metadataProvider"></param>
        /// <param name="actionContext"></param>
        /// <param name="cancellationToken"></param>
        /// <returns></returns>
        public override Task ExecuteBindingAsync(ModelMetadataProvider metadataProvider,
                                                    HttpActionContext actionContext,
                                                    CancellationToken cancellationToken)
        {
 
            // only PATCH is handled here; PUT could be treated similarly
            if (!actionContext.Request.Method.Method.Equals("PATCH", StringComparison.InvariantCultureIgnoreCase))
                return Task.FromResult(0);

           var binding = actionContext
                .ActionDescriptor
                .ActionBinding;
            var type = binding
                        .ParameterBindings.FirstOrDefault(f => f is DeltaBodyParameterBinding)
                        .Descriptor.ParameterType;

            if (type.IsGenericType && type.GetGenericTypeDefinition().IsAssignableFrom(typeof(Data.OData.Delta<>)))
            {
                return actionContext.Request.Content
                        .ReadAsStreamAsync()
                        .ContinueWith((task) =>
                        {
                           // create instance of e.g. Delta<company>()
                            var delta = (Delta)Activator.CreateInstance(typeof(Data.OData.Delta<>).MakeGenericType(type.GetGenericArguments()));

                            DeltaCopyUtil.CopyEntityToDelta(delta, task.Result);

                            SetValue(actionContext, delta);
                        });
            }
        

            throw new InvalidOperationException("Only Delta<> parameters are supported by DeltaBodyParameterBinding");
        }
        /// <summary>
        /// Signals Web API that this binding reads the request body.
        /// </summary>
        public override bool WillReadBody
        {
            get
            {
                return true;
            }
        }
    }
}

DeltaCopyUtil

using Newtonsoft.Json;
using Newtonsoft.Json.Linq;
using System;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Reflection;


namespace MyMuke.Data.OData
{
   
 
    public static class DeltaCopyUtil
    {
        public static void CopyEntityToDelta(Delta delta, Stream origMessage)
        {
            var jOb = default(JToken);
            object baseObjFromJSON = null;
            using (var mem = new MemoryStream((int)origMessage.Length))
            {
                origMessage.CopyTo(mem);
                mem.Position = 0;
                var sr = new StreamReader(mem);

                var ser = new JsonSerializer();
                ser.MissingMemberHandling = MissingMemberHandling.Ignore;
                // deserialize twice:
                // one time to get a TEntityType instance, the other time as a JToken
                try
                {
                    baseObjFromJSON = ser.Deserialize(new JsonTextReader(sr), ((dynamic)delta).EntityType);
                    mem.Position = 0;
                    sr.DiscardBufferedData(); // rewind the reader as well, not only the stream
                    jOb = JToken.ReadFrom(new JsonTextReader(sr));
                }
                catch (Exception ex)
                {
                    Trace.TraceError("CopyEntityToDelta failed: {0}", ex.Message);
                    throw; // rethrow without destroying the stack trace
                }
            }

            // no need to retrieve nested objects, I think, since the parent object must be
            // included in the JSON if a sub-object was changed
            var keysGivenInJson = jOb.Children().OfType<JProperty>().Select(s => s.Name).ToList();
            if (keysGivenInJson.Contains("id")) // do not patch the primary key
            {
                keysGivenInJson.Remove("id");
            }
            var backupType = baseObjFromJSON.GetType();
            // FastPropertyAccessor and IsCollection() come from the cherry-picked OData 5.8 sources
            var tobeCopied = backupType.GetProperties()
                    .Where(p => keysGivenInJson.Contains(p.Name) && (p.GetSetMethod() != null || p.PropertyType.IsCollection()) && p.GetGetMethod() != null)
                    .Select(p => new FastPropertyAccessor(p))
                    .ToDictionary(p => p.Property.Name);
            foreach (var prop in tobeCopied)
            {
                var val = prop.Value.GetValue(baseObjFromJSON);
                var result = delta.TrySetPropertyValue(prop.Key, val);
                if (result == false)
                {
                    Trace.TraceWarning("CopyEntityToDelta: Cannot set property {0} on Entity type {1}", prop.Key, backupType.Name);
                }
            }

            // any keys that are in the JSON while not on the EntityType as a property
            // are OData Open Type (dynamic) properties
            var keysdynamic = keysGivenInJson.Where(s => !tobeCopied.Keys.Contains(s)).ToList();
            if (keysdynamic.Any())
            {
                var entityValue = ((dynamic)delta).GetEntity();
                // copy the dynamic properties to delta.GetEntity()
                var targetEntCollection = backupType.GetProperty("dynamic_properties", BindingFlags.SetProperty | BindingFlags.Public | BindingFlags.Instance);
                if (targetEntCollection != null)
                {
                    // raw JToken values; adapt this if your open-type dictionary expects CLR primitives
                    targetEntCollection.SetValue(entityValue, keysdynamic.ToDictionary(k => k, k => (object)jOb[k]));
                }
            }
        }
    }
}
 

The rest of the code

I will not bother you with the rest of the code. It is a cherry-pick of the code I got from GitHub, which works for OData 5.8.0, where the Patch method works (it was fixed there, but not ‘here’). :)

You can download the complete code on this page: FIX OData 6.0.0 Patch Method.

This also means that we have var instance = delta.GetEntity(); instead of var instance = delta.GetInstance() (a typical change that OData 6.0.0 introduced compared to 5.8.x).

 

I hope this saves you from many hours, even days, of frustrating research trying to get OData to do what you expect it to do.

 

Inherit Microsoft.Azure.Documents.Resource? No

We can be short: just don’t. If you do, your library gets bloated and you need to add references to DocumentDB in cross-referenced libraries as well, because of the ‘Resource’ dependency.

Use a POCO-based ‘base object’? Yes

If you implement DocumentDB for the first time, you’ll quickly find that documentation and old samples suggest using the built-in Microsoft.Azure.Documents.Resource base object properties that DocumentDB supports.

It’s quite simple: don’t use them. There is an improved syntax, as with MongoDB, which enables you to define your own id and such.

Say, this one:

public class BaseObject
{
    public BaseObject()
    {
        UpdateTimeStamp();
    }

    public void UpdateTimeStamp()
    {
        this.timestamp = DateTime.UtcNow;
    }

    [ModelIgnore]
    public virtual int? document_type { get; set; }

    [JsonProperty("id", Required = Required.DisallowNull)]
    public Guid id { get; set; }

    [JsonProperty("timestamp", Required = Required.DisallowNull)]
    public DateTime timestamp { get; set; }

    /// <summary>
    /// warning, do not SET this; it is a calculated field
    /// </summary>
    [JsonProperty("name", Required = Required.DisallowNull)]
    public string name { get; set; }
}
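To make this concrete: a derived document type could look like the sketch below. The property names are assumptions for illustration; only BaseObject and the JsonProperty usage come from the code above.

// hypothetical derived document type, for illustration only
public class company : BaseObject
{
    [JsonProperty("street")]
    public string street { get; set; }

    [JsonProperty("city")]
    public string city { get; set; }
}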

 

Now, your DocumentDB context class (or whatever you named it) could have a method like this:

 

public async Task<bool> DeleteItemsAsync(BaseObject item)
{
    var collection = MetaUtil.CollectionId(item.GetType());

    // calculate the URL ourselves;
    // this differs from SelfLink but seems to work!
    var docuId = UriFactory.CreateDocumentUri(DatabaseId, collection, item.id.ToString());
    try
    {
        var response = await _client.DeleteDocumentAsync(docuId);

        return response.StatusCode == HttpStatusCode.NoContent;
    }
    catch (Exception ex)
    {
        Trace.TraceError("DeleteItem failed {0}", ex);
        return false;
    }
}

 

 

As you can see, there is a UriFactory class that contains a lot of static URI creators for any object type that DocumentDB supports.

By the way, I like DocumentDB. After finding out about https://azure.microsoft.com/en-us/blog/azure-documentdb-bids-fond-farewell-to-self-links/, I quickly could ‘unbloat’ the library. :)

Just a tiny gem, which is not often required, but it can save you some time. It also demonstrates the power of IXmlReader in unmanaged code. As far as I am aware, the processing time for a .config file, measured with TickCount, is always 0 ms (too small to measure). Microsoft has optimized the XmlLite reader implementation for fast forward-only reading, and it does not allocate strings in memory; it just passes pointers to the Unicode strings (either key or value). In line with that, you might appreciate :) why I Attach to the BSTR key for the lookup as well.

What this class does: it reads the <appSettings> section and puts the key/value pairs in a map of named values.

Note 1: I am a big fan of CComBSTR when the final client still understands COM/Automation. That is the reason I did not use CString in this class. In addition, I have boosted the CComBSTR class to optimize reallocation of existing memory, but you can use the default MS implementation as well. And you can change the std::map<CComBSTR, CComBSTR> to a CString-based map if you wish.

Note 2: The .config file is cached, but it is parsed again if the file write time of the .config file has changed.
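For reference, the class parses the standard .NET configuration layout (including the optional configSource redirect); the key and value below are placeholders:

<configuration>
  <appSettings>
    <add key="myKey" value="myValue"/>
  </appSettings>
</configuration>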

The config.h header:

#pragma once

#include <xmllite.h>
#include <map>


using namespace ATL;

class ConfigurationManager
{
private:
    static const int DELAYTICKS = 1000;
    std::map<CComBSTR, CComBSTR> _map;
    time_t _ftLastCheck;
    CComPtr<IXmlReader> _xmlReader;
    CComPtr<IMalloc> _malloc;
    HRESULT CheckTimeOut();
    // full (wide string) path to the .config file
    CComBSTR _szFilePath;
   
    void Init();

public:
    ConfigurationManager();
    ConfigurationManager(const BSTR configFile);
    std::wstring& AppSettings(const std::wstring key, PCWSTR defaultValue = NULL);
    BSTR AppSettings(const BSTR key, PCWSTR defaultValue = NULL);
    time_t GetFileTime();
    ~ConfigurationManager();
};

 

Implementation:


#include <ctime>
#include <sys/stat.h>
#include "config.h"
#pragma comment(lib, "xmllite.lib")


ConfigurationManager::ConfigurationManager(const BSTR configFile) throw()
{
    _szFilePath = configFile;
    time(&_ftLastCheck);
    Init();
}
ConfigurationManager::ConfigurationManager() throw()
{   
    time(&_ftLastCheck);
    _szFilePath.Attach(GetModulePath());   
    if (!_szFilePath.IsEmpty())
    {       
        _szFilePath.Append(L".config");
       
        Init();
    }   
}
void ConfigurationManager::Init() throw()
{
    if (!_szFilePath.IsEmpty())
    {
        HRESULT hr = CoGetMalloc(1, &_malloc);
        hr = CreateXmlReader(IID_IXmlReader, (void**)&_xmlReader, _malloc);
        if (FAILED(hr))
        {
            // TODO: handle the error; the class is unusable if the reader could not be created
        }
    }
}
time_t ConfigurationManager::GetFileTime() throw()
{   
    struct stat stResult;
    CComBSTR ansi(_szFilePath);
    ansi.Attach(ansi.ToByteString());
    ::stat((char*)ansi.m_str, &stResult);        // get the attributes of the .config file
   
    return stResult.st_mtime;
}
BSTR ConfigurationManager::AppSettings(const BSTR key, PCWSTR defaultValue) throw()
{
   
    HRESULT hr = CheckTimeOut();
    if (FAILED(hr))
    {
      
        return NULL;
    }
    CComBSTR find;
    find.Attach(key);
   
    auto found = _map.find(find);
    find.Detach();
       if (found != _map.end())
    {
        return found->second.Copy();
    }
    else if (defaultValue != NULL)
    {
        return ::SysAllocString(defaultValue);
    }
    return NULL;
   
}
ConfigurationManager::~ConfigurationManager() throw()
{
    _map.clear();
    _xmlReader.Release();
    _malloc.Release();
    _szFilePath.Empty();
}
HRESULT ConfigurationManager::CheckTimeOut() throw()
{
   
    auto curT = GetFileTime();
   
    PCWSTR pwzValue;
    auto memResult = ::difftime(curT, _ftLastCheck);
    if (memResult != 0.0F)
    {
        DWORD start = ::GetTickCount();
   
        HRESULT hr = S_OK;
       
        CComPtr<IStream> pStream;
        CComPtr<IXmlReaderInput> _readerInput;
       
        hr = ::SHCreateStreamOnFileEx(_szFilePath, STGM_READ | STGM_SHARE_DENY_NONE, FILE_ATTRIBUTE_NORMAL, FALSE,NULL, &pStream);

        if (SUCCEEDED(hr))
        {
            hr = ::CreateXmlReaderInputWithEncodingCodePage(pStream, _malloc, CP_UTF8, TRUE, NULL, &_readerInput);           
            hr = _xmlReader->SetProperty(XmlReaderProperty_DtdProcessing, DtdProcessing_Prohibit);   
            hr = _xmlReader->SetInput(_readerInput);
        }   
        else
        {
            return hr;
        }
       
        XmlNodeType nodeType = XmlNodeType::XmlNodeType_None;
        UINT lenValue;
        PCWSTR key;
        bool startCollecting  = false;
        while (S_OK == _xmlReader->Read(&nodeType) && hr == S_OK)
        {
            switch(nodeType) {
            case XmlNodeType::XmlNodeType_EndElement:
               
                //hr = pReader->GetDepth(&dept);
                hr = _xmlReader->GetLocalName(&pwzValue, NULL);
                if (startCollecting && lstrcmpW(pwzValue, L"appSettings") == 0)
                {
                    //break loop
                    hr = S_FALSE;
                }
                break;
            case XmlNodeType::XmlNodeType_Element:
                {
                    // get element name such as option in <option value="11">
                    hr = _xmlReader->GetLocalName(&pwzValue, NULL);
           
                    if (FAILED(hr)) break;
                   
                    //iOrdinalCount++;
                    if (startCollecting == false && lstrcmpW(pwzValue, L"appSettings") == 0)
                    {
                        startCollecting = true;

                        hr = _xmlReader->MoveToAttributeByName(L"configSource", NULL);
                        if (hr == S_OK)
                        {
                            hr = _xmlReader->GetValue(&pwzValue, NULL);
                            
                                              
                            if (::PathIsRelativeW(pwzValue) == TRUE)
                            {
                                //TODO: call back to do a Server.MapPath
                                _szFilePath.Attach(FileStripFile(_szFilePath));                               
                                _szFilePath.Append(L'\\');
                                _szFilePath.Append(pwzValue);
                            }
                            else
                            {
                                _szFilePath = pwzValue;
                            }
                            _readerInput.Release();
                            pStream.Release();
                            return CheckTimeOut(); //recursion                           
                        }
                        hr = S_OK;//reset otherwise loop stops
                    }                   
                    else if (startCollecting && lstrcmpW(pwzValue, L"add") == 0)
                    {
                       
                        hr = _xmlReader->MoveToAttributeByName(L"key", NULL);
                        if (hr == S_OK)
                        {
                            hr = _xmlReader->GetValue(&pwzValue, &lenValue);
                            //key.Append( pwzValue, lenValue);
                            key = pwzValue;

                            //ATLTRACE(L"found key %s %d\r\n", pwzValue, lenValue);
                            hr = _xmlReader->MoveToAttributeByName(L"value", NULL);
                            if (hr == S_OK)
                            {
                                _xmlReader->GetValue(&pwzValue, NULL);
                                _map.insert(std::pair<CComBSTR, CComBSTR>(key, pwzValue));
                            }
                        }
                    }                   
                }
                break;
            }
        }
        if (SUCCEEDED(hr)) _ftLastCheck = curT;
        if (_xmlReader != NULL)
        {
            _xmlReader->SetInput(NULL);
        }
    
        return S_FALSE;
       
    }

    return S_OK;
   
};

 

Why convert WCF REST services anyway?
First: WCF REST processes JSON using (by default) the DataContractJsonSerializer, while Web API 2 (by default) uses Newtonsoft JSON, which today is the better choice.
Second: WCF more or less runs in the ASP.NET context using an HttpContext hack, and routing to a WCF service is much more complex than the easy attributed (RoutePrefixAttribute) Web API controller.
Third: Arguably, WCF REST services are simply from the previous generation of .NET. Certainly, I would still use WCF for implementing a SOAP client/server, but not for REST.

So, how can I convert my WCF REST services to Web API 2 without telling customers to implement changes as well?

Here is my experience, which might save you some time.

One thing up front: my customer already used VB.NET in this project, so don’t blame me for using VB.NET instead of C#. :)

The existing Services (called ‘Controllers’ in Web API terms) looked like this…

Service Body definition+attributes

<AspNetCompatibilityRequirements(RequirementsMode:=AspNetCompatibilityRequirementsMode.Required)>
<ServiceContract(Namespace:="https://mydomain.blah.nl/Service")>
<ServiceBehavior(
     Name:="MobileService",
     ConcurrencyMode:=ConcurrencyMode.Multiple,
     MaxItemsInObjectGraph:=Integer.MaxValue)>
<DataContractFormat()>
Partial Public Class MobileService
     Inherits ServiceBase
….

ServiceBase is just a base class with some shortcut methods for getting the Membership user, and Booleans like ‘IsAdmin’. For brevity, you don’t need to see it.

Service actions

They look like this:

<OperationContract()>
<WebInvoke(Method:="GET", UriTemplate:="Product/{productid}")>
Public Function GetProduct(productid As String) As Product
    If MembershipUser Is Nothing Then Throw New UnauthorizedAccessException()
    Return New Product(CInt(productid))
End Function

There are several attributes, such as WebGet etc., which you will quickly recognize as having counterparts in Web API 2.

WCF REST?

You know: REST = HTTP [action verb(s)] + URL + [body].
WCF had an attribute, AspNetCompatibilityRequirements, which enabled you to even have session state and run within the ASP.NET pipeline. However, REST should not have a ‘session state’.

The response can, depending on the HTTP Accept header, be either application/json or application/xml.

JSON is the easy part, because it does not deal with XML namespaces. However, if an endpoint client requests application/xml, the service might return a constructed root element name, using the controller name as a base name, such as <ArrayOfHardwareService.Category xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.datacontract.org/2004/07/Yadda">

As you can see, because my WCF REST controller was named HardwareService, that name is used in the XML output. If you have existing customers, you cannot just change it to, say, ‘ArrayOfHardwareController’.

Now, a real service implementor would advise you to use CollectionDataContract attributes. Please do so for new, from-scratch projects. However, again, I don’t want to redefine my existing object model, which can be a lot of work!

TIP 1: Use this excellent hack, which really works like a charm:

http://www.strathweb.com/2013/02/but-i-dont-want-to-call-web-api-controllers-controller/

In VB.NET (I guess you’re gasping to see it?) this little helper looks like this:

Public Class CustomHttpControllerTypeResolver
      Inherits DefaultHttpControllerTypeResolver
      Public Sub New()
          MyBase.New(Function(T As Type)
                         If T Is Nothing Then Throw New ArgumentNullException("t")
                         Return T.IsClass AndAlso T.IsVisible AndAlso Not T.IsAbstract AndAlso GetType(ApiController).IsAssignableFrom(T) AndAlso GetType(IHttpController).IsAssignableFrom(T)
                     End Function)
      End Sub
  End Class

In Application_OnStart (or similar) you add this:

Web.Http.GlobalConfiguration.Configuration.Services.Replace(GetType(System.Web.Http.Dispatcher.IHttpControllerTypeResolver), New Api.CustomHttpControllerTypeResolver())

In WebApiConfig use this:

'override the suffix 'Controller' requirement
Dim suffix = GetType(DefaultHttpControllerSelector).GetField("ControllerSuffix", BindingFlags.Static Or BindingFlags.Public)
If suffix IsNot Nothing Then suffix.SetValue(Nothing, String.Empty)

I really like this hack! We don’t need to mess with caching the Web API 2 controllers ourselves, which would be madness to implement (mostly).

Now the next challenge. Most companies have JSON/XML services available both as end-to-end services and as data sources in websites, which consume them using JavaScript.
In ASP.NET, you probably have some FormsAuthentication mechanism, which is cookie based and optimized for persisting an authenticated session.
Web API 2 controllers do support this, using the Authorize attribute. However, you’ll discover it does NOT support Basic authentication, which, in combination with SSL, is a good candidate for protecting data at most B2B endpoints.

So you need a ‘hack’ to elegantly support BOTH FormsAuthentication and Basic authentication. Note: the sample from the web that I started from ONLY supports Basic authentication, incorrectly calling it ‘mixed’ support, which it was not. My code, however, supports both FormsAuthentication and Basic authentication.

Note 1: It does not support the FormsAuthentication challenge sequence, which I don’t need, since one normally does not log on to a JSON service URL/endpoint using a browser. MyBase.IsAuthorized(actionContext) does the trick here, so you don’t have to validate the .ASPXAUTH cookie (part of FormsAuthentication) yourself.

Note 2: You must finish the TODO comment; otherwise the attribute won’t work for you.

TIP 2: Use the attribute below as a replacement for the Authorize attribute.

''' <summary> 
''' HTTP authentication filter for ASP.NET Web API
''' </summary>
''' <seealso cref="http://piotrwalat.net/basic-http-authentication-in-asp-net-web-api-using-membership-provider/"/>
Public MustInherit Class BasicHttpAuthorizeAttribute
    Inherits AuthorizeAttribute

    Private Const BasicAuthResponseHeader = "WWW-Authenticate"
    Private Const BasicAuthResponseHeaderValue = "Basic"

    Public Overrides Sub OnAuthorization(actionContext As HttpActionContext)

        If (actionContext Is Nothing) Then
            Throw New ArgumentNullException("actionContext")
        End If
        If (AuthorizationDisabled(actionContext) OrElse MyBase.IsAuthorized(actionContext) OrElse AuthorizeRequest(actionContext.ControllerContext.Request)) Then
            Return
        End If

        HandleUnauthorizedRequest(actionContext)
    End Sub

    Protected Overrides Sub HandleUnauthorizedRequest(actionContext As HttpActionContext)

        If (actionContext Is Nothing) Then
            Throw New ArgumentNullException("actionContext")
        End If
        actionContext.Response = CreateUnauthorizedResponse(actionContext.ControllerContext.Request)
    End Sub

    Private Shared Function CreateUnauthorizedResponse(request As HttpRequestMessage) As HttpResponseMessage

        Dim result = New HttpResponseMessage() With
                     {
                        .StatusCode = HttpStatusCode.Unauthorized,
                        .RequestMessage = request
                    }

        'we need to include WWW-Authenticate header in our response,
        'so our client knows we are using HTTP authentication
        result.Headers.Add(BasicAuthResponseHeader, BasicAuthResponseHeaderValue)
        Return result
    End Function

    Private Shared Function AuthorizationDisabled(actionContext As HttpActionContext) As Boolean
        'support New AllowAnonymousAttribute
        If Not actionContext.ActionDescriptor.GetCustomAttributes(Of AllowAnonymousAttribute).Any() Then
            Return actionContext.ControllerContext.ControllerDescriptor().GetCustomAttributes(Of AllowAnonymousAttribute).Any()
        Else
            Return True
        End If
    End Function

    Private Function AuthorizeRequest(request As HttpRequestMessage) As Boolean

        Dim authValue = request.Headers.Authorization
        If (authValue Is Nothing OrElse String.IsNullOrWhiteSpace(authValue.Parameter) OrElse
            String.IsNullOrWhiteSpace(authValue.Scheme) OrElse
            authValue.Scheme <> BasicAuthResponseHeaderValue) Then

            Return False
        End If

        Dim parsedHeader = ParseAuthorizationHeader(authValue.Parameter)
        If parsedHeader Is Nothing Then
            Return False
        End If
        Dim principal As IPrincipal = Nothing
        If TryCreatePrincipal(parsedHeader(0), parsedHeader(1), principal) Then

            HttpContext.Current.User = principal
            Return CheckRoles(principal) AndAlso CheckUsers(principal)

        Else
            Return False
        End If
    End Function

    Private Function CheckUsers(principal As IPrincipal) As Boolean

        Dim usrs = UsersSplit
        If usrs.Length = 0 Then Return True
        'NOTE: This is a case sensitive comparison
        Return usrs.Any(Function(u) principal.Identity.Name = u)
    End Function

    Private Function CheckRoles(principal As IPrincipal) As Boolean

        Dim rls = RolesSplit
        If rls.Length = 0 Then Return True
        Return rls.Any(Function(r) principal.IsInRole(r))
    End Function

    Private Shared Function ParseAuthorizationHeader(authHeader As String) As String()

        Dim credentials = Encoding.ASCII.GetString(Convert.FromBase64String(authHeader)).Split(":"c)
        If (credentials.Length <> 2 OrElse String.IsNullOrEmpty(credentials(0)) OrElse
            String.IsNullOrEmpty(credentials(1))) Then
            Return Nothing
        End If
        Return credentials
    End Function

    Protected ReadOnly Property RolesSplit() As String()
        Get
            Return SplitStrings(Roles)
        End Get
    End Property

    Protected ReadOnly Property UsersSplit() As String()
        Get
            Return SplitStrings(Users)
        End Get
    End Property

    Protected Shared Function SplitStrings(input As String) As String()
        If String.IsNullOrWhiteSpace(input) Then Return New String() {}
        Dim result = input.Split(","c).Where(Function(s) Not String.IsNullOrWhiteSpace(s.Trim()))
        Return result.Select(Function(s) s.Trim()).ToArray()
    End Function

    ''' <summary>
    ''' Implement to include authentication logic and create IPrincipal
    ''' </summary>
    Protected MustOverride Function TryCreatePrincipal(user As String, password As String, ByRef principal As IPrincipal) As Boolean
End Class
Public Class MembershipHttpAuthorizeAttribute
    Inherits BasicHttpAuthorizeAttribute

    ''' <summary>
    ''' Implement to include authentication logic and create IPrincipal
    ''' </summary>
    Protected Overrides Function TryCreatePrincipal(user As String, password As String, ByRef principal As IPrincipal) As Boolean

        principal = Nothing
        If Not Membership.ValidateUser(user, password) Then
            Return False
        End If
        Dim rles = Web.Security.Roles.Provider.GetRolesForUser(user)

        'TODO: You must assign here your OWN principal, for instance:
        'principal = New GenericPrincipal(New GenericIdentity(user), rles)
        Return True
    End Function

End Class

RESULT

Final Controller body

<RoutePrefix("api/blah"), MembershipHttpAuthorize(Roles:=aspnet_Role.blahRole+","+ aspnet_Role.BlahRole2)>
Partial Public Class MobileService
    Inherits Api.ApiBaseController

As you can see, I don’t have the ‘Controller’ suffix on my Web API 2 controller, and I can even use the RoutePrefix attribute. Second, I did not use the ‘Authorize’ attribute, but the mixed MembershipHttpAuthorize attribute.

Controller Actions

''' <summary>
''' Looks up some data by ID.
''' </summary>
<HttpGet, Route("Product/{productid}")>
Public Function GetProduct(productid As Integer) As IHttpActionResult

    Return Ok(New Product(productid))
End Function

I don’t know if WCF could support non-string parameters, and I don’t want to know. Anyway, as you see above, it’s quite simple.

In this case, I like to have a function of type IHttpActionResult, because then I can easily return BadRequest() or NotFound(). http://www.asp.net/web-api/overview

 

Quirks.

- Sometimes it seemed that jQuery simply did not behave nicely with a REST/JSON call that only returns HTTP 200 (OK) with no response body (this was also the case in the WCF implementation at my client). I found that service reliability improved by returning a value such as Ok(True). So, basically: always define your actions with a specific return type, not ‘void’ or ‘Sub’. OK?

- Another issue, occurring with HttpPost and HttpPut, is when parameters come partly from the URI and partly from the body. WCF could figure this out, but strangely enough you must help Web API 2 using the FromUriAttribute and FromBodyAttribute attributes. I did not have time to figure out exactly when this is needed, so I just added the attribute.

So:

<HttpPost, Route("Network/{networkid}/GetCustomerConsumer/")>
Public Function GetCustomerConsumer(networkid As Integer, <FromBody> req As GetCustomerConsumerRequest) As IHttpActionResult
    Try
        Return Ok(GetCustomer(networkid, req))
    Catch ex As Exception
        Return BadRequest(ex.Message)
    End Try
End Function

In the sample below, it certainly was necessary to define a ‘dummy’ class to pass simple types like status, which is an integer.

<HttpPost, Route("status/{myid}")>
Public Function SetStatus(myid As Integer, <FromBody> dm As Dummy) As IHttpActionResult

Public Class Dummy
     Public remark As String
     Public status As Integer
End Class
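A client would then call it with a JSON body that matches the Dummy class; the route values here are made up for illustration:

POST /api/blah/status/42
Content-Type: application/json

{ "remark": "approved", "status": 2 }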

- Ironically, the DataContractJsonSerializer was able to convert JSON ‘objects’ back to an interface, say ICustomer, while the Newtonsoft serializer complains about not being able to cast from an object to ICustomer. It might happen in your project as well, as long as (not sure, however) there is no ambiguity about which class it should instantiate for ICustomer. (The KnownTypeAttribute should normally fix this.) The Newtonsoft serializer lets you solve it with an attribute. The code below might also save you some hours of research on how to fix that. (Sorry, this time it is C#, yeah.)

Here we use the attribute:

[DataMember]
[JsonConverter(typeof(PersonConverter))]
public ICustomer Customer { get; set; }

And here is the class, in case you need the concept, plus how to use it:

public class PersonConverter : JsonCreationConverter<Person>
    {
        protected override Person Create(Type objectType, JObject jObject)
        {
            if (FieldExists("Initials", jObject))
            {
                return new Person();
            }
            if (FieldExists("Type", jObject))
            {
                return new Contact();
            }
            return null;
        }
        private static bool FieldExists(string fieldName, JObject jObject)
        {
            return jObject[fieldName] != null;
        }
    }
    public abstract class JsonCreationConverter<T> : JsonConverter
    {
        /// <summary>
        /// Create an instance of objectType, based properties in the JSON object
        /// </summary>
        /// <param name="objectType">type of object expected</param>
        /// <param name="jObject">
        /// contents of JSON object that will be deserialized
        /// </param>
        /// <returns></returns>
        protected abstract T Create(Type objectType, JObject jObject);

        public override bool CanConvert(Type objectType)
        {
            return typeof(T).IsAssignableFrom(objectType);
        }

        public override object ReadJson(JsonReader reader,
                                        Type objectType,
                                         object existingValue,
                                         JsonSerializer serializer)
        {
            // Load JObject from stream
            var jObject = JObject.Load(reader);

            // Create target object based on JObject
            var target = Create(objectType, jObject);

            // Populate the object properties
            serializer.Populate(jObject.CreateReader(), target);

            return target;
        }

        public override void WriteJson(JsonWriter writer,
                                       object value,
                                       JsonSerializer serializer)
        {
            //default easy muke
            serializer.Serialize(writer, value);
        }
    }

Another quirk that might bite you is the fact that a WCF REST service defaults to returning application/xml content, while Web API defaults to application/json when a client application did not specify the Accept header, or specifies ‘text/html’ (as browsers do). The line below makes such requests return application/xml again.

config.Formatters.XmlFormatter.SupportedMediaTypes.Add(New MediaTypeHeaderValue("text/html"))

WebApi Config

You need to adapt the JSON serialization as well. Keep using the Newtonsoft.Json serializer; do not set json.UseDataContractJsonSerializer = true!

You need to set MicrosoftDateFormat to be compatible with the WCF REST JSON output (instead of ISO dates). Second, you need to output null values as well. There is also an issue with time zone support in WCF (unspecified, I believe) which leads to crazy bugs in DateTime output; without solving that WCF issue in this article, you need ‘Unspecified’ here as well.

Another nice feature while debugging/developing is indented JSON, which allows you to easily read your JSON output in your favorite browser’s network trace.

   Public Class WebApiConfig
        Public Shared Sub Register(ByVal config As HttpConfiguration)
            config.MapHttpAttributeRoutes()

            Dim json = config.Formatters.JsonFormatter
            json.SerializerSettings.DateFormatHandling = Newtonsoft.Json.DateFormatHandling.MicrosoftDateFormat          
            json.SerializerSettings.NullValueHandling = Newtonsoft.Json.NullValueHandling.Include
            json.SerializerSettings.DateTimeZoneHandling = Newtonsoft.Json.DateTimeZoneHandling.Unspecified
#If DEBUG Then
            json.SerializerSettings.Formatting = Newtonsoft.Json.Formatting.Indented
#End If
            'override the suffix 'Controller' requirement
            Dim suffix = GetType(DefaultHttpControllerSelector).GetField("ControllerSuffix", BindingFlags.Static Or BindingFlags.Public)
            If suffix IsNot Nothing Then suffix.SetValue(Nothing, String.Empty)  

        End Sub
    End Class

 

I just want to share this function, since there are a lot of versions around which are not resistant to zero-length elements. For instance, splitting ‘1,2,3’ into a table works fine, but what if one element is empty, as in ‘1,,3’? This function deals with that by returning a NULL element.

Usage:

SELECT * FROM [udf_SplitVarchar2Table]('one,two,three', ',')

returns:
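col
------
one
two
three

And here is the function itself: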

ALTER FUNCTION [dbo].[udf_SplitVarchar2Table]
(
    @List varchar(max),
    @delimiter VARCHAR(10)
)

RETURNS
    @Values TABLE(col VARCHAR(512))
AS

BEGIN 
    IF @List IS NULL OR LEN(@List) = 0 RETURN;
 
  SET @List = replace(@List,CHAR(39)+CHAR(39),CHAR(39))
 
  DECLARE @Index INT=1; 
  DECLARE @ItemValue varchar(100);  
  DECLARE @pos INT = 1;
  DECLARE @l INT = LEN(@List);

  WHILE @Index > 0   
    BEGIN        
      SET @Index = CHARINDEX(@Delimiter,@List, @pos);  
   
      IF @Index  > 0 
            IF (@index- @pos> 0)
                SET @ItemValue = SUBSTRING(@List,@pos, @index- @pos );
            ELSE
                SET @ItemValue=NULL;
      ELSE
        IF (@l-@pos+1)>0
            SET @ItemValue =SUBSTRING( @List, @pos, @l-@pos+1) ;
        ELSE
            SET @ItemValue = NULL;

      INSERT INTO @Values (col) VALUES (@ItemValue);    
      SET @pos = @index+1;
    END
    RETURN;
END

 

There are a lot of ways to read and parse HTML; the better tricks don’t use IE itself, since that will deliver automation errors and waste memory.

I’m into .NET programming for 99% of my time, but still, one of my hobbies uses an Access 2013 database and thus a VBA codebase, yummy! And to get power features, I compiled a TLB to have interfaces like IPersistStreamInit, IStream, etc. (it’s called ODL compiling and requires MkTypLib.exe, not midl.exe!)

Now here is a neat way to fetch plain HTML text and load it into an HTMLDocument without any dependency on IE automation. You’re a smart, non-lazy programmer (right?), so you get the idea for C# as well, since you need IPersistStreamInit there too. It’s COM interop, dude!

Public Function HttpGet(ByRef url As String) As mshtml.HTMLDocument
    Dim xmlHttp As MSXML2.ServerXMLHTTP60
    Set xmlHttp = CreateObject("MSXML2.ServerXMLHTTP.6.0") 'use the 6.0 ProgID to match the declared type
    xmlHttp.Open "GET", url, False
    xmlHttp.send
   'set return value
    Set HttpGet = New HTMLDocument
    Dim stream As adodb.stream
    Set stream = CreateObject("ADODB.Stream")
    Dim istrea As IPersistStreamInit
   
   'get interface IPersistStreamInit from HTMLDocument
    Set istrea = HttpGet
   
   'write the muke using a binary array (bytes)
    stream.Type = adTypeBinary
    stream.Open
    stream.write xmlHttp.responseBody
   'reset stream
    stream.position = 0
    'load the muke into the HTMLDocument
    istrea.Load stream

    Dim s As Single
    s = Timer

   'fake body onload ready
    Do Until Timer - s > 10 Or HttpGet.ReadyState = "complete"
        DoEvents        
    Loop

End Function
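Since I claimed you get the idea for C# as well, below is a hedged sketch of the same trick in C#. It is an untested outline, not production code: it assumes a COM reference to Microsoft.mshtml and an STA thread; the IID and vtable order of IPersistStreamInit come from objidl.h; SHCreateMemStream (shlwapi) wraps the raw bytes in a COM stream; and you would still want the readyState polling from the VBA version.

using System;
using System.Runtime.InteropServices;
using System.Runtime.InteropServices.ComTypes;

[ComImport, Guid("7FD52380-4E07-101B-AE2D-08002B2EC713"),
 InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
interface IPersistStreamInit
{
    void GetClassID(out Guid pClassID);  // inherited from IPersist
    [PreserveSig] int IsDirty();
    void Load(IStream pStm);
    void Save(IStream pStm, [MarshalAs(UnmanagedType.Bool)] bool fClearDirty);
    void GetSizeMax(out long pcbSize);
    void InitNew();
}

static class HtmlLoader
{
    // wraps a byte array in a read-only COM IStream (exported by name on Vista and later)
    [DllImport("shlwapi.dll")]
    static extern IStream SHCreateMemStream(byte[] pInit, uint cbInit);

    public static mshtml.HTMLDocument LoadHtml(byte[] htmlBytes)
    {
        var doc = new mshtml.HTMLDocument();
        // QueryInterface for IPersistStreamInit and load the raw bytes, no IE automation involved
        ((IPersistStreamInit)doc).Load(SHCreateMemStream(htmlBytes, (uint)htmlBytes.Length));
        return doc;
    }
}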

I found an easy way to pass binary parameters as Base64-encoded strings. You might wonder: why bother?

Well, in a heavily used environment, the size and compactness of data over the wire still matter! A binary value is normally sent as hexadecimal text over the wire, and with two-byte characters, hexadecimal costs about 4 bytes per original byte (2 characters per byte), while a Base64-encoded string needs only about 3 (4 characters per 3 bytes).

example:

EXEC proc_receiveMyBlob 0xA05FDAF  (etc) 

 The stored procedure itself would have this signature:

CREATE PROC proc_receiveMyBlob @myBlob varbinary(max)    -- or image, whatever
AS

BEGIN 

   INSERT INTO tblMyBlobs VALUES(@myBlob);

END 

The trick:

 CREATE PROC proc_receiveMyBlob @myBlob xml  -- <-- use the xml T-SQL data type
AS

BEGIN 

-- remember, the binary field in SQL must not be changed to xml, keep it as binary! 

   INSERT INTO tblMyBlobs VALUES(@myBlob.value('xs:base64Binary(.)', 'varbinary(max)') );

END  

 

The call to the stored proc (obviously) looks something like this:

EXEC proc_receiveMyBlob 'SGVsbG8gQmFzZTY0' 

or if you like:

EXEC proc_receiveMyBlob '<data>SGVsbG8gQmFzZTY0</data>' 
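From .NET, the call could look like the sketch below; the connection handling is an assumption for illustration, and ADO.NET accepts a plain string for the xml parameter type:

// hypothetical ADO.NET caller: send the blob as a Base64 string into the xml parameter
using System;
using System.Data;
using System.Data.SqlClient;

static class BlobSender
{
    public static void SendBlob(string connectionString, byte[] blobBytes)
    {
        using (var cn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("proc_receiveMyBlob", cn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            // Convert.ToBase64String produces the 4-chars-per-3-bytes encoding discussed above
            cmd.Parameters.Add("@myBlob", SqlDbType.Xml).Value = Convert.ToBase64String(blobBytes);
            cn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}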

 

 

If you’re a site admin or asp.net developer for an internet site, you certainly need to look into sitemaps, if you want to perform SEO.

You don’t need to simply crawl your own site and give every page a priority, but consider this for a forum or other pages which are updated irregularly or frequently: if you don’t want crawlers to do unneeded round trips, implement a sitemap.

Your ‘robots.txt’ should contain a reference to your map, e.g. Sitemap: http://www.myfantasticsite.com/sitemap.xml

Ideas for this class, written in C#, can be found all over the net. However, as some might know me, I like it to be finished, neat and self-supporting, ready for usage (e.g. the output must not have to be post-processed as a string to add or remove attributes that the serializer could not handle).

The following things are solved.
Since ‘changefreq’, ‘lastmod’ and ‘priority’ are optional values, you don’t want the XmlSerializer to create empty tags!
This is done by adding a DefaultValue attribute. It causes the XmlSerializer to check the current value against the default value; if they are equal, the tag is treated as non-existing and omitted. Remember that the default values need to be out of the range of possible values! Therefore, EnumChangeFreq contains an extra member ‘notset’.
Remember, the System.Xml.Serialization namespace offers the tools to get this done without loading your XML into some XmlDocument class.

You can use the class as follows:

UrlSet retVal = new UrlSet();

retVal.AddUrl(new Url() { LastModifiedDateTime = DateTime.Now.ToUniversalTime(), Loc = "http://www.myfantasticsite.com/blah.aspx" });

Retrieve the XML sitemap string:

string xml = null;
using (var io = (MemoryStream)retVal.ToStream())
{
    xml = new UTF8Encoding().GetString(io.ToArray());
}

or, to write it directly to an output stream

using (var io = (MemoryStream)retVal.ToStream())
{
    //todo: deal with Response.Cache, ETag and Last-Modified to avoid unnecessary round trips
    Response.ContentType = "text/xml";
    Response.CharSet = "utf-8";
    Response.BinaryWrite(io.ToArray());
}
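For a single URL, the serialized output then looks roughly like this (attribute order may vary, and the lastmod formatting depends on the DateTime kind):

<?xml version="1.0" encoding="utf-8"?>
<urlset xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.sitemaps.org/schemas/sitemap/0.9 http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd" xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.myfantasticsite.com/blah.aspx</loc>
    <lastmod>2017-01-01T12:00:00Z</lastmod>
  </url>
</urlset>

And here is the full class: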

using System;
using System.Xml;
using System.Xml.Serialization;
using System.ComponentModel;
using System.IO;
using System.Text;

namespace adccure
{

    public enum EnumChangeFreq
    {
        notset,
        always,
        hourly,
        daily,
        weekly,
        monthly,
        yearly,
        never
    }

[XmlRoot(ElementName = "urlset", Namespace = SCHEMA_SITEMAP)]
public sealed class UrlSet
{
    [XmlNamespaceDeclarations]
    public XmlSerializerNamespaces xmlns;
    private const string XSI_NAMESPACE = "http://www.w3.org/2001/XMLSchema-instance";
    private const string SCHEMA_SITEMAP = "http://www.sitemaps.org/schemas/sitemap/0.9";

    private Url[] _url;

    public UrlSet()
    {
        _url = new Url[0];
        xmlns = new XmlSerializerNamespaces();
        xmlns.Add("xsi", XSI_NAMESPACE);
        SchemaLocation = SCHEMA_SITEMAP + " " + "http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd";

    }
    [XmlAttribute(AttributeName = "schemaLocation", Namespace = XSI_NAMESPACE)]
    public string SchemaLocation;

    public void AddUrl(Url url)
    {
        int l = _url.Length + 1;
        Array.Resize(ref _url, l);
        _url[l - 1] = url;
    }

    [XmlElement(ElementName = "url")]
    public Url[] url
    {
        get { return _url; }
        set { _url = value; } //bogus
    }
    /// <summary>
    /// serializes the UrlSet to a sitemap.xsd conform string ready for saving to disk.
    /// </summary>
    /// <returns>a Stream object</returns>
    public Stream ToStream()
    {
        XmlSerializer xmlser = new XmlSerializer(GetType());
        var io = new MemoryStream();
        xmlser.Serialize(new StreamWriter(io, Encoding.UTF8), this);
        io.Position = 0;
        return io;
    }
}

    public sealed class Url
    {
        private string _loc;
        private DateTime _lastmod;
        private float _priority;
        private EnumChangeFreq _changefreq;

        public Url()
        {
            //setting defaults
            _changefreq = EnumChangeFreq.notset;
            _priority = 0.0F;
            _lastmod = DateTime.MinValue;
        }

        [XmlElement(ElementName = "loc")]
        public string Loc
        {
            get
            {
                return _loc;
            }

            set
            {
                if (string.IsNullOrEmpty(value))
                {
                    throw new ArgumentNullException();
                }
                if (value.Length < 12 || value.Length > 2048)
                {
                    throw new ArgumentException("loc must be between 12 and 2048 in length");
                }
                _loc = value;
            }
        }
        [XmlElement(ElementName = "lastmod"), DefaultValue(typeof(DateTime), "1-1-0001")]
        public DateTime LastModifiedDateTime
        {
            get
            {
                return _lastmod;
            }

            set
            {
                _lastmod = new DateTime(value.Year, value.Month, value.Day, value.Hour, value.Minute, value.Second, value.Kind);

            }
        }
        [XmlElement(ElementName = "changefreq")]
        [DefaultValue(EnumChangeFreq.notset)]
        public EnumChangeFreq ChangeFreq
        {
            get
            {
                return _changefreq;
            }

            set
            {
                _changefreq = value;
            }
        }
        [XmlElement(ElementName = "priority")]
        [DefaultValue(0.0F)]
        public float Priority
        {
            get
            {
                return _priority;
            }

            set
            {
                if (value < 0 || value > 1.0)
                {
                    throw new ArgumentException("Must be between 0 and 1");
                }
                _priority = value;
            }
        }
    }
}


Figure 1: Our custom pager in action!

I have never liked the concept of handing all the data in whatever form (DataTable/lists of records/etc.) to the ASP.NET GridView control and letting it manage paging for me automatically. This can be improved using the Visual Studio 2008 wizards, but that requires writing stored procedures.

I’ve got a concept for you which works without a lot of extra work. The concept is:

  1. Inherit from asp:GridView and override the PageCount and PageIndex properties
  2. Create an instance of my CustomPager class at DataBinding time.

The result is as shown in figure 1: it adheres to PageButtonCount and to any styles you have applied to the GridView, and it features a ‘jump to page’ input box.
The ‘native’ event handling in your ASPX can still be used, since this code emulates the PageIndexChanging event.

Other solutions implement an AJAX UpdatePanel per row, which really minimizes unnecessary refreshing of grid data.
However, I don’t mind if, say, 30 rows are pulled from the DB and bound to a GridView. That way we get the best of two worlds: on the one hand, all rows on the current page are freshly read from the real table; on the other hand, fetching only the active row could leave outdated data on screen when co-writers have updated rows in the database that are not reflected in our grid.

So, in other words, I like this pager control because it is a balanced solution! It has been tested on 10,000 records. When I page to, for instance, page 900, it is a matter of fractions of a second to get a response. (For paging solutions on SQL Server tables that contain millions of rows, we would need a more sophisticated approach.)

Here is the control source (it’s called gridhelper.cs)! (In theory, it should work for .NET 2.0 up to 3.5)

// if you use this code, please leave the original author
//author: Egbert Nierop
using System;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace adccure.tools
{
    public sealed class GridView2 : GridView
    {
        public GridView2()
            : base()
        {
        }
        private int _pageCount;

        public override int PageIndex
        {
            get
            {
                object o = ViewState["pgi"];
                return o == null ? 0 : (int)o;
            }
            set
            {
                ViewState["pgi"] = value;
            }
        }

        public override int PageCount
        {
            get
            {
                return _pageCount;
            }

        }
        public void PageCountSet(int pageCount)
        {
            _pageCount = pageCount;
        }
    }

    public sealed class CustomPager : ITemplate
    {
        readonly Table tbl;
        readonly GridView2 _grid;

        //readonly int _totalPages;
        public CustomPager(GridView2 grid, int totalPages)
        {
            tbl = new Table();
            grid.PageCountSet(totalPages);
            tbl.Width = Unit.Percentage(100);
            _grid = grid;
        }
        void ITemplate.InstantiateIn(Control container)
        {
            container.Controls.Add(tbl);

            int pageSize = _grid.PageSize;
            int pageIndex = _grid.PageIndex;
            int pageButtonCount = _grid.PagerSettings.PageButtonCount;
            int pageCount = _grid.PageCount;
            ClientScriptManager cs = _grid.Page.ClientScript;
            _grid.PagerSettings.Visible = true;
            var trow = new TableRow();
            var trowpagePosition = new TableRow();
            tbl.Rows.Add(trow);
            tbl.Rows.Add(trowpagePosition);
            TextBox tb = new TextBox();
            tb.ID = "txtJumpPage";
            tb.MaxLength = 4;
            tb.Width = Unit.Pixel(40);
            tb.Text = (pageIndex + 1).ToString();
            //avoid bubble up by return false
            tb.Attributes["onkeydown"] = string.Format("if (event.keyCode==13) {{__doPostBack('{0}', 'Page$' + this.value); return false;}}", _grid.UniqueID); // note: braces must be escaped as {{ }} or string.Format throws

            LiteralControl lit = new LiteralControl(string.Format(" of {0}", (pageCount + 1).ToString()));
            TableCell posCaption = new TableCell();
            trowpagePosition.Cells.Add(posCaption);
            posCaption.Controls.Add(tb);
            posCaption.Controls.Add(lit);
            int cellspan = 0;
            if (pageIndex > 0)
            {
                var cellText = new TableCell();
                trow.Cells.Add(cellText);
                cellspan++;
                cellText.Controls.Add(new HyperLink()
                {
                    NavigateUrl = cs.GetPostBackClientHyperlink(_grid,
                    string.Format("Page${0}", pageIndex - pageButtonCount >= 0 ? (pageIndex - pageButtonCount) + 1 : 1), false),
                    Text = "<"
                });
            }
            for (int x = pageIndex; x < pageIndex + pageButtonCount && x <= pageCount; x++)
            {
                var cellText = new TableCell();
                cellspan++;
                trow.Cells.Add(cellText);
                cellText.Controls.Add(new HyperLink()
                {
                    NavigateUrl = cs.GetPostBackClientHyperlink(_grid,
                        string.Format("Page${0}", x + 1), false),
                    Text = (x + 1).ToString(),
                });

            }
            if (pageIndex + pageButtonCount < pageCount)
            {
                var cellText = new TableCell();
                cellspan++;
                trow.Cells.Add(cellText);

                cellText.Controls.Add(new HyperLink()
                {
                    NavigateUrl = cs.GetPostBackClientHyperlink(_grid,
                    string.Format("Page${0}", (pageIndex + pageButtonCount) + 1), false),
                    Text = ">"
                });
            }
            tbl.Visible = true;
            posCaption.HorizontalAlign = HorizontalAlign.Center;
            posCaption.ColumnSpan = cellspan;
        }
    }
}

Now, I won’t publish the code that reads the sample data (such as Northwind); it would be yadda, yadda and you know the drill. (In my code it is just a plain HttpReferrer table with all the columns that let you research this particular statistic of your web site.)
But my data layer contains things like the method shown below. It is a great solution for tables with, say, fewer than 100,000 records. SQL Server deals with these kinds of queries pretty well, and we still avoid pumping lots of redundant data over the network and into the GridView control.

public IList<HttpReferrer> getHttpReferrers(int pPage, int pPageSize, HttpReferrerSortOrder sortOrder,
    SortDirection sortDirection, out int totalRecords)
{
            totalRecords = dcd.HttpReferrers.Count();
            IQueryable<HttpReferrer> retVal = null;
            if (sortDirection ==  SortDirection.Ascending)
            {
                switch (sortOrder)
                {
                    case HttpReferrerSortOrder.Referer:
                        retVal = dcd.HttpReferrers.OrderBy(t => t.Referrer);
                        break;
                    case HttpReferrerSortOrder.IP:
                        retVal = dcd.HttpReferrers.OrderBy(t => t.IP_Address);
                        break;
                    case HttpReferrerSortOrder.Page:
                        retVal = dcd.HttpReferrers.OrderBy(t => t.page);
                        break;
                    default:
                        retVal = dcd.HttpReferrers.OrderBy(t => t.ts);
                        break;
                }
            }
            else
            {
                switch (sortOrder)
                {
                    case HttpReferrerSortOrder.Referer:
                        retVal = dcd.HttpReferrers.OrderByDescending(t => t.Referrer);
                        break;
                    case HttpReferrerSortOrder.IP:
                        retVal = dcd.HttpReferrers.OrderByDescending(t => t.IP_Address);
                        break;
                    case HttpReferrerSortOrder.Page:
                        retVal = dcd.HttpReferrers.OrderByDescending(t => t.page);
                        break;
                    default:
                        retVal = dcd.HttpReferrers.OrderByDescending(t => t.ts);
                        break;
                }
            }
            return retVal.Skip(pPage * pPageSize).Take(pPageSize).ToList();
        }

So, our grid can sort and it can page.

How do we deal with the paging event at the code behind the aspx?
It’s so simple!

void gridHttpReferrers_PageIndexChanging(object sender, GridViewPageEventArgs e)
{
    gridHttpReferrers.PageIndex = e.NewPageIndex;
    gridHttpReferrers.DataBind();
}

void gridHttpReferrers_DataBinding(object sender, EventArgs e)
{
    int totalRecords;
    int pageSize = gridHttpReferrers.PageSize;
    int pageIndex = gridHttpReferrers.PageIndex;
    gridHttpReferrers.DataSource = datadal.getHttpReferrers(pageIndex, pageSize, SortOrder, SortDir, out totalRecords);
    gridHttpReferrers.PagerTemplate = new CustomPager(gridHttpReferrers, totalRecords);
    // note: integer division rounds down; a trailing partial page is not counted
    gridHttpReferrers.PageCountSet(totalRecords / pageSize);
}

What did I do to get the new grid behavior inside the ASPX?
Just rename the tag from asp:GridView to ctrl:GridView2 and register the prefix on the page (or in web.config, as sketched after the markup below).

<%@ Register TagPrefix="ctrl" Namespace="adccure.tools" %>

<ctrl:GridView2 runat="server" EmptyDataText="No data available." ID="gridHttpReferrers" AllowPaging="true" AllowSorting="True" Width="100%" AutoGenerateColumns="false" PageSize="20" DataKeyNames="ID">
<PagerSettings Position="Top" PageButtonCount="20" />
<Columns> etc.
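
The web.config alternative could look like this sketch (add an assembly="..." attribute when GridView2 is compiled into a separate assembly; omit it for App_Code):

<!-- sketch: register the tag prefix site-wide instead of per page -->
<system.web>
  <pages>
    <controls>
      <add tagPrefix="ctrl" namespace="adccure.tools" />
    </controls>
  </pages>
</system.web>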

So, I hope this code was quite enlightening for you and that you can play with it and have fun.

Sometimes, especially for files hosted on external FTP servers where file names are case sensitive, a file named myFILE.html is not the same file as myfile.html in the same path!

This function can be used on an NTFS path for that purpose, where File.Exists would fail because it is case insensitive. (It does not, however, enable you to have two files differing only in case in the same path.)

/// <summary>
/// Checks existence of a file using a case-sensitive compare
/// </summary>
/// <param name="file">must be a full filename</param>
/// <returns>true when a file with exactly this casing exists</returns>
static bool FileExists(string file)
{
    string pathCheck = Path.GetDirectoryName(file);
    string filePart = Path.GetFileName(file);
    if (string.IsNullOrEmpty(pathCheck))
    {
        throw new ArgumentException("The file must include a full path", "file");
    }
    string[] checkFiles = Directory.GetFiles(pathCheck, filePart, SearchOption.TopDirectoryOnly);
    if (checkFiles != null && checkFiles.Length > 0)
    {
        // must be a binary (ordinal, case-sensitive) compare
        return Path.GetFileName(checkFiles[0]) == filePart;
    }
    return false;
}
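
A quick usage sketch (the path is made up):

// returns true only when the on-disk casing matches exactly
bool exists = FileExists(@"C:\inetpub\ftproot\myFILE.html");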

 

I've seen people pulling their hair out over not getting this API working for them.

The API, even when impersonating the current user, returns error 1309: "An attempt has been made to operate on an impersonation token by a thread that is not currently impersonating a client."

The clue is that this API needs a duplicated token handle (and this is not clearly documented on MSDN).

Anyway, spare your hair and have fun with the code. B.t.w., you can hire me for smart code and research on components etc.

http://www.adccure.nl for contact.

    [StructLayout(LayoutKind.Sequential)]
    internal struct GENERIC_MAPPING
    {
        internal uint GenericRead;
        internal uint GenericWrite;
        internal uint GenericExecute;
        internal uint GenericAll;
    }

    [DllImport("advapi32.dll", SetLastError = false)]
    static extern void MapGenericMask([In, MarshalAs(UnmanagedType.U4)] ref TokenAccessLevels AccessMask,
        [In] ref GENERIC_MAPPING map);

    [DllImport("advapi32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    public static extern bool DuplicateToken(IntPtr ExistingTokenHandle,
        [MarshalAs(UnmanagedType.U4)] TokenImpersonationLevel level,
        out int DuplicateTokenHandle);

    [DllImport("advapi32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    static extern bool AccessCheck(
        [MarshalAs(UnmanagedType.LPArray)] byte[] pSecurityDescriptor,
        IntPtr ClientToken,
        [MarshalAs(UnmanagedType.U4)] TokenAccessLevels accessmask,
        [In] ref GENERIC_MAPPING GenericMapping,
        IntPtr PrivilegeSet,
        ref int PrivilegeSetLength,
        out uint GrantedAccess,
        [MarshalAs(UnmanagedType.Bool)] out bool AccessStatus);

    [DllImport("kernel32")]
    static extern void CloseHandle(IntPtr ptr);

    internal static bool hasReadAccess(string path)
    {
        // Obtain the authenticated user's identity
        WindowsIdentity winId = WindowsIdentity.GetCurrent(TokenAccessLevels.Duplicate | TokenAccessLevels.Query);

        WindowsImpersonationContext ctx = null;
        int statError = 0;
        IntPtr dupToken = IntPtr.Zero;
        try
        {
            // Start impersonating
            //ctx = winId.Impersonate(); works, but AccessCheck does not like this

            int outPtr;
            // AccessCheck needs a duplicated token!
            DuplicateToken(winId.Token, TokenImpersonationLevel.Impersonation, out outPtr);

            dupToken = new IntPtr(outPtr);
            ctx = WindowsIdentity.Impersonate(dupToken);
            GENERIC_MAPPING map = new GENERIC_MAPPING();
            map.GenericRead = 0x80000000;
            map.GenericWrite = 0x40000000;
            map.GenericExecute = 0x20000000;
            map.GenericAll = 0x10000000;
            TokenAccessLevels required = TokenAccessLevels.Query | TokenAccessLevels.Read | TokenAccessLevels.AssignPrimary | (TokenAccessLevels)0x00100000; // add SYNCHRONIZE
            MapGenericMask(ref required, ref map);

            uint status = 0;
            bool accessStatus = false;
            // dummy area; the size should be 20, we don't do anything with it
            int sizeps = 20;
            IntPtr ps = Marshal.AllocCoTaskMem(sizeps);

            // AccessControlSections.Owner | AccessControlSections.Group MUST be included,
            // otherwise the descriptor would be rejected with ERROR 1338
            var ACE = Directory.GetAccessControl(path,
                AccessControlSections.Access | AccessControlSections.Owner | AccessControlSections.Group);

            bool success = AccessCheck(ACE.GetSecurityDescriptorBinaryForm(), dupToken, required, ref map,
                ps, ref sizeps, out status, out accessStatus);
            Marshal.FreeCoTaskMem(ps);
            if (!success)
            {
                statError = Marshal.GetLastWin32Error();
            }
            else
            {
                return accessStatus;
            }
        }
        // Prevent exceptions from propagating
        catch (Exception ex)
        {
            Trace.Write(ex.Message);
        }
        finally
        {
            // Revert impersonation
            if (ctx != null)
                ctx.Undo();
            CloseHandle(dupToken);
        }
        if (statError != 0)
        {
            throw new Win32Exception(statError);
        }

        return false;
    }

This code is just a cut and paste. You can make it pretty.
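
To call it, a minimal sketch (the path is made up; the snippet assumes the usual usings: System.Runtime.InteropServices, System.Security.Principal, System.Security.AccessControl, System.IO, System.Diagnostics, System.ComponentModel):

// true when the (impersonated) current user has read access to the folder
bool canRead = hasReadAccess(@"C:\inetpub\wwwroot");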

Just for educational purposes (for myself as well :) ) I post this code.

Using this, a programmer can follow a good practice: use the culture information that is built into .NET instead of hand-crafting that data.

Based upon the following element in web.config, the list will be filled with the correct number of month names. It also takes into account calendars that have 13 months in some cultures.

<globalization uiCulture="nl-nl"/>

using System;
using System.Collections.Generic;
using System.Web;
using System.Linq;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Threading;
using System.Globalization;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        int monthNumber = 0;
        CultureInfo ci = Thread.CurrentThread.CurrentUICulture;
        var myMonthnames = ci.DateTimeFormat.MonthNames
            .Take(ci.Calendar.GetMonthsInYear(DateTime.Today.Year))
            .Select(p => new { monthNo = ++monthNumber, monthName = p });
        ddlMonthnames.DataTextField = "monthName";
        ddlMonthnames.DataValueField = "monthNo";
        ddlMonthnames.DataSource = myMonthnames;
        ddlMonthnames.DataBind();
    }
}

<select name="ddlMonthnames" id="ddlMonthnames">
<option value="1">januari</option>
<option value="2">februari</option>
<option value="3">maart</option>
<option value="4">april</option>
<option value="5">mei</option>
<option value="6">juni</option>
<option value="7">juli</option>
<option value="8">augustus</option>
<option value="9">september</option>
<option value="10">oktober</option>
<option value="11">november</option>
<option value="12">december</option>
</select>

I was wondering whether well-known C# (or VB.NET, if you wish) flow statements such as for and foreach are faster or slower than the equivalent LINQ expression.

The results are refreshing :). At least for this simple array iteration. Iterating millions of array elements is of course not the real-life CPU eater for an average ASP.NET web site, but consider this code.

There are three loops, all doing the same thing. (N.b.: I ran this on a dual-core i7200 CPU machine on Vista x64.)

int ctr = 0;
var values = new string[1000000]
    .Select(p => (ctr++).ToString())
    .ToArray();
List<int> intList;
intList = values.Select(p => int.Parse(p)).ToList();
int[] test1, test2, test3;
// loop 10 times and calculate the average
test1 = new int[10];
test2 = new int[10];
test3 = new int[10];
for (int zz = 0; zz < 10; zz++)
{
    // our millisecond counter
    // it's ok to run this test several times to get an average score
    int start = Environment.TickCount;
    // convert the numeric string array back to an int list
    intList = values.Select(p => int.Parse(p)).ToList();
    test1[zz] = Environment.TickCount - start;

    // now do the same but using a foreach iteration
    start = Environment.TickCount;
    intList = new List<int>();
    foreach (var p in values)
    {
        intList.Add(int.Parse(p));
    }
    test2[zz] = Environment.TickCount - start;

    // do it a last time, but with a for{} iteration
    // theoretically this should save us an enumerator
    start = Environment.TickCount;
    intList = new List<int>();
    int z = values.Length;
    for (int x = 0; x < z; x++)
    {
        intList.Add(int.Parse(values[x]));
    }
    test3[zz] = Environment.TickCount - start;
    Console.WriteLine("{0}, {1}, {2}", test1[zz], test2[zz], test3[zz]);
}
Console.WriteLine("{0}, {1}, {2}", test1.Average(), test2.Average(), test3.Average());
Console.ReadKey();

To test this, run the code in release mode and start it without debugging (Ctrl+F5 in Visual Studio).

x64 CPU platform results:
Test 1) 175ms
Test 2) 154 ms
Test 3) 155 ms

x86 CPU platform results:
Test 1) 198 ms
Test 2) 161 ms
Test 3) 169 ms

Test 1 uses a LINQ expression to 'cast' a numeric string array to a List of type Int32.
Cute line, isn't it?
But unfortunately, as the numbers show, the LINQ expression is still a bit slower than the non-LINQ versions.

How much slower is it when we deal with plain integer math and avoid any parsing overhead?

Now if we replace the int.Parse() statement with a trivial integer operation, such as
intList = intvalues.Select(p => p - 1).ToList();, we have to increase the loop count to 10,000,000 for the workload to reach any significance. Now we measure plain LINQ overhead.
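
For reference, a sketch of what the modified Test 1 could look like (intvalues is an assumed int[] with 10,000,000 elements; it does not appear in the listing above):

// assumed setup, not part of the original listing:
// int[] intvalues = Enumerable.Range(0, 10000000).ToArray();
int start = Environment.TickCount;
// Test 1 without parsing: one trivial integer operation per element
List<int> intList = intvalues.Select(p => p - 1).ToList();
Console.WriteLine("LINQ Select took {0} ms", Environment.TickCount - start);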

x64 results:
Test 1) 269 ms
Test 2) 125 ms
Test 3) 128 ms

x86 results:
Test 1) 276 ms
Test 2) 126 ms
Test 3) 121 ms

Conclusion:
I expect that over time compilers will become smarter and optimize LINQ expressions even better.
However, as we saw in the first example, the cost of int.Parse already flattened the results and greatly reduced the relative slowness of LINQ.
Parsing and converting data in loops is something we do constantly when we deal with XML and databases. So when the workload within the loop increases, the overhead of the LINQ expression quickly stops being an important factor.

So, for shorter code, I would not hesitate to use expressions as in Test 1.
In a real-life business application, the performance of loops really does not determine the final user experience. It is (e.g.) how we access a database or other resources, such as XML.

It would be another story, if we e.g. were doing 3D math animations, where C++ would be an obvious choice.

 

LINQ! Yes, I too fell in love with LINQ.

So here is my first try. And as you fans know, I like to dig into subjects.

Maybe the title is a little incorrect, but I wanted a query that returns records using a subquery in one statement!

Using SQL syntax, this would look like the query just below. Yes, it seems I can dream SQL, but it was shocking to see how much I underestimated the LINQ syntax, which took some extra hairs from my head.

The query returns events that are not already fully booked (there are still some places left).

SELECT [t0].[id], [t0].[maxNumberofParticipants], [t0].[OwerId], [t0].[Description], [t0].[StartTime], [t0].[EndTime],
[t0].[orderId] FROM [dbo].[Activity] AS [t0]
WHERE ([t0].[OwerId] = @p0) AND ([t0].[maxNumberofParticipants] > @p1) AND ([t0].[maxNumberofParticipants] > (ISNULL((
    SELECT SUM([t2].[value])
    FROM (
        SELECT [t1].[countOfPersons] AS [value], [t1].[activityId]
        FROM [dbo].[ActivityParticipant] AS [t1]
        ) AS [t2]
    WHERE [t2].[activityId] = [t0].[id]
    ),0)))

I could make this a stored proc and execute it easily. But that was in my previous life.

So, how do we express this as a LINQ query?

The trick with the ISNULL function is to cast the countOfPersons field to a nullable type! Since GetValueOrDefault() is available only on nullable data types, we must cast it to an int? datatype.
The LINQ provider will translate it, when it sends the actual SQL to SQL Server, using the T-SQL function COALESCE.
B.t.w., it will not use the ISNULL function.

public IEnumerable<Activity> getActivityParticipableByOwnerWith(Guid ownerGuid)
{
    var query = from a in Activities
                where a.OwerId == ownerGuid && a.maxNumberofParticipants > 0 && a.maxNumberofParticipants >
                    ActivityParticipants.Where(aid => aid.activityId == a.id).Sum(aid => (int?)aid.countOfPersons).GetValueOrDefault(0)
                select a;

    return query;
}
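
A quick usage sketch (the owner Guid is made up; Activity's members follow the SQL columns above):

// list the activities that still have places left for this owner
Guid ownerId = new Guid("11111111-2222-3333-4444-555555555555");
foreach (Activity a in getActivityParticipableByOwnerWith(ownerId))
{
    Console.WriteLine("{0}: max {1} participants", a.Description, a.maxNumberofParticipants);
}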

Kudos to Khaled Moawad, who very patiently helped me improve the syntax. B.t.w., guys/girls: always try hard yourself before you ask the global community for support. That makes your brain retain it better :)

My next goal is to make LINQ feel like a natural language to me, like SQL does :)
