I was wondering whether the well-known C# (or VB.NET, if you wish) flow-control statements, such as for and foreach, are faster or slower than an equivalent LINQ expression.

The results are refreshing, at least for this simple array iteration. Iterating over millions of array elements is, of course, not the real-life CPU eater for an average ASP.NET website, but consider this code.

There are three loops, all doing the same work. (N.b.: I ran this on a dual-core i7200 CPU machine on Vista x64.)

int ctr = 0;

// build an array of 1,000,000 numeric strings: "0", "1", "2", ...
var values = new string[1000000].Select(p => (ctr++).ToString()).ToArray();

// convert the numeric string array to a List<int>
List<int> intList = values.Select(p => int.Parse(p)).ToList();

int[] test1, test2, test3;

 // loop 10 times and calculate the average

test1 = new int[10];

test2 = new int[10];

test3 = new int[10];

for (int zz = 0; zz < 10; zz++)
{
    // our millisecond counter

    // it's ok to run this test several times to get an average score

    int start = Environment.TickCount;

    // convert the numeric string array back to a list of ints

    intList = values.Select(p => int.Parse(p)).ToList();

    test1[zz] = Environment.TickCount - start;

    //now do the same but using a foreach iteration

    start = Environment.TickCount;

    intList = new List<int>();

    foreach (var p in values)
    {
        intList.Add(int.Parse(p));
    }
    test2[zz] = Environment.TickCount - start;

    // do it one last time, but with a for{} iteration

    // theoretically this should save us an enumerator.

    start = Environment.TickCount;

    intList = new List<int>();

    int z = values.Length; 

    for (int x = 0; x < z; x++)
    {
        intList.Add(int.Parse(values[x]));
    }
    test3[zz] = Environment.TickCount - start;

    Console.WriteLine("{0}, {1}, {2}", test1[zz], test2[zz], test3[zz]);
}

Console.WriteLine("{0}, {1}, {2}", test1.Average(), test2.Average(), test3.Average());


To test this, run the code in Release mode and start it without debugging (Ctrl+F5 in Visual Studio).

x64 CPU platform results:
Test 1) 175 ms
Test 2) 154 ms
Test 3) 155 ms

x86 CPU platform results:
Test 1) 198 ms
Test 2) 161 ms
Test 3) 169 ms

Test 1 uses a LINQ expression to 'cast' a numeric string array to a List<int>.
Cute line, isn't it?
But unfortunately, as the numbers show, the LINQ expression is still a bit slower than the non-LINQ versions.

How much slower when we deal with integer math and avoid any parsing overhead?

Now, if we replace the int.Parse() statement with a silly integer operation, such as
intList = intValues.Select(p => p - 1).ToList();
we have to increase the element count to 10,000,000 for the workload to reach any significance. Now we measure plain LINQ performance.
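A minimal sketch of this modified benchmark, assuming an int[] named intValues built as the integer counterpart of the string array above (the name and setup are my assumptions; the original post only shows the Select line):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class LinqIntBenchmark
{
    static void Main()
    {
        // 10,000,000 integers; built once, outside the timed sections
        int[] intValues = Enumerable.Range(0, 10000000).ToArray();

        // the silly integer operation replacing int.Parse(), via LINQ
        int start = Environment.TickCount;
        List<int> intList = intValues.Select(p => p - 1).ToList();
        Console.WriteLine("LINQ version: {0} ms", Environment.TickCount - start);

        // the equivalent foreach version for comparison
        start = Environment.TickCount;
        intList = new List<int>();
        foreach (var p in intValues)
        {
            intList.Add(p - 1);
        }
        Console.WriteLine("foreach version: {0} ms", Environment.TickCount - start);
    }
}
```

As in the main test, averaging several runs gives a steadier picture than a single TickCount delta.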

x64 results:
Test 1) 269 ms
Test 2) 125 ms
Test 3) 128 ms

x86 results:
Test 1) 276 ms
Test 2) 126 ms
Test 3) 121 ms

I expect that over time compilers will become smarter and optimize LINQ expressions even better.
However, as we saw in the first example, int.Parse already flattened the results and greatly reduced the relative slowness of LINQ.
Parsing and converting data in loops is something we constantly do when dealing with XML and databases. So as the workload within the loop increases, the overhead of LINQ expressions quickly becomes an insignificant factor.

So, for shorter code, I would not hesitate to use expressions like the one in Test 1.
In a real-life business application, the performance of loops rarely determines the final user experience. What does is, e.g., how we access a database or other resources such as XML.

It would be another story if we were doing, e.g., 3D math animations, where C++ would be an obvious choice.