Keeping things simple

Archive for the ‘Programing’ Category

Estimating with planning poker


Planning with poker, you say? That sounds as reasonable as planning a construction project by drawing plans on napkins. Unreasonable as it may sound, planning poker works, but not as you would think. The plan is just a by-product of the planning poker session; the real result is accurate estimates: not precise, but accurate.

Planning Poker is a consensus-based estimation technique, mostly used to estimate effort or the relative size of user stories in software development.

Planning poker will provide you with estimates from the development team, and from those estimates the project manager can, with some dose of certainty, put the estimate in the context of a project plan. Accurate estimates with which the development team is comfortable empower the project manager to create an executable project plan and to take the required action to see out the user stories on schedule.

A common situation on any project is the client demanding delivery of “x” user stories on the fixed date “y”. With accurate estimations of user stories that can be depended on, the project manager can see outright whether the required user stories can be delivered in the available sprints. With that information the project manager can negotiate with the client on reducing the scope, add new team members, outright reject the client’s request, or take any other action in the project manager’s arsenal.

What’s the advantage of using planning poker over other estimation techniques, you ask?

So how does the estimation go?

The estimation meeting should happen on a regular schedule, but you should adjust your meetings according to your needs. User stories should be estimated when they appear and re-estimated as new findings and information come to light. The project manager should nominate the user stories for the team to estimate in the estimation meeting.

Estimating is done with card decks that have different values, usually based on the Fibonacci sequence, the most common being 0, ½, 1, 2, 3, 5, 8, 13, 20, 40, 100, ? (unsure). This sequence reflects the inherent uncertainty in estimating larger items, thus ensuring that the estimation precision is correlated to the estimation value. This is very important, because if you estimate one user story at 50.5 and another at 47.65, that is very precise, probably wrong, and quite irrelevant to your needs. More importantly, what is the quantifiable difference between 50.5 and 47.65, based on a five-sentence description of a user story?

The meeting itself goes through a few basic steps:

  1. Product owner provides a short overview of the user story
  2. The team asks questions and discusses the story to clarify assumptions and risks
  3. Product owner records the summary of discussion to improve the user story definition
  4. Each individual lays a card face down representing their estimate
  5. The team turns the cards over simultaneously
  6. People with high estimates and low estimates are given a soap box to offer their justification for their estimate.
  7. Discussion continues from step 2 to step 6 until a consensus is reached

The procedure continues until all of the user stories are estimated and the team is happy with the estimations. Then the product owner can re-prioritize the backlog, and with the new estimates the project manager can re-adjust the overall project plan.

The estimation meeting must adhere to some rules:

  1. During discussion, numbers and other forms of size (long, short, much, less, etc.) must not be mentioned at all in relation to feature size
  2. An egg timer is used to ensure that discussion is structured
  3. The developer who is likely to own the deliverable has a large portion of the “consensus vote”
  4. No more than 10 people are involved
  5. Moderator, product owner or project manager are not allowed to estimate.
  6. Units used vary – they can be days duration, ideal days or story points, but they must be announced by the moderator at the start of the meeting
  7. Each estimator is given one deck of the cards
  8. All decks have identical sets of cards in them
  9. Moderator can negotiate a consensus

Rules 1 and 2 are the basis of a successful estimation meeting: team members must not anchor to someone’s estimates, and you wouldn’t want the discussion on a user story to go on for hours, would you? The egg timer forces the team to re-cast their estimations based on the current state of the discussion, to see how close the consensus is. If consensus can’t be reached after a couple of iterations, then the moderator can and should suggest postponing the estimation of the user story to revisit at another meeting; or, if that is not possible and an estimation must be made, the moderator should take the vote of the majority.

The planning poker estimates might not be the most precise, but the delivered estimates are dependable and a good basis for building an iteration plan to discuss with your client.

Written by Luka Ferlež

February 4, 2013 at 22:29

How to configure MS DTC through a firewall


If you are using TransactionScope for handling your transactions, then the ambient transactions you open with TransactionScope will be enlisted with the Distributed Transaction Coordinator (DTC). For the transaction to run, the DTC (Microsoft or otherwise) must exist on all machines that will participate in the transaction, and the participants must be able to talk to each other through the firewall.
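To make that concrete, here is a minimal C# sketch of the kind of code that ends up needing the DTC: opening connections to two different servers inside one TransactionScope promotes the ambient transaction to a distributed one. The connection strings are placeholders, adjust them to your own servers.

using System.Data.SqlClient;
using System.Transactions;

using (TransactionScope scope = new TransactionScope())
{
    // placeholder connection strings, one per participating machine
    using (SqlConnection first = new SqlConnection("Data Source=ServerA;Initial Catalog=Db1;Integrated Security=True"))
    using (SqlConnection second = new SqlConnection("Data Source=ServerB;Initial Catalog=Db2;Integrated Security=True"))
    {
        first.Open();
        second.Open(); // the ambient transaction is escalated to the DTC here

        // ... execute commands on both connections ...
    }
    scope.Complete();
}

If the DTCs on those machines can’t talk to each other through the firewall, the second Open() (or the first command after it) is where the transaction errors will surface.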

The DTC requires several things to be able to communicate:

  • being able to resolve names by DNS or NetBios (from both sides)
  • being able to communicate through port 135 (RPC endpoint mapper port for the handshake)
  • being able to dynamically assign at least one port for communication (by default in the 1024 – 65535 range)
  • RPC & DTC must exist on all participants

First ensure that all of the participants can resolve their respective names by using the ping command; if they can’t, then add entries to the LMHOSTS file to successfully resolve names.

The communication begins with the originating DTC negotiating with the destination DTC through port 135 the authentication and the port on which the actual exchange of transaction data will occur. That port is randomly assigned in the 1024 – 65535 range, which means that the firewall must allow all of those ports in both directions for the DTC to work properly, which of course blows the heads off of the IT security guys.

Lucky for the IT security guys, you can control the port range for dynamic assignment through a few registry values. The settings are controlled by the Ports, PortsInternetAvailable and UseInternetPorts values under the “HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc” path. When limiting the port range, Microsoft recommends using ports 5000 and above and assigning a minimum of 20 ports. However, you should assign a minimum of 100 ports, since RPC will be used by other processes as well as by the DTC.

So for example, to limit the port range to ports 20000-21000, do the following (a scripted variant follows the list):

  • Add key Name:”Internet”
  • Add value Name:”Ports”, Type:”REG_MULTI_SZ” (Multistring), Data:”20000-21000″
  • Add value Name:”PortsInternetAvailable”, Type:”REG_SZ” (String), Data:”Y”
  • Add value Name:”UseInternetPorts”, Type:”REG_SZ” (String), Data:”Y”
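If you would rather script this than edit the registry by hand, a small C# sketch along these lines should do it (an assumption of mine, not from the original post; run it as administrator, and a restart is needed before the DTC picks up the new range):

using Microsoft.Win32;

class ConfigureRpcPorts
{
    static void Main()
    {
        // create (or open) HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc\Internet
        using (RegistryKey key = Registry.LocalMachine.CreateSubKey(@"Software\Microsoft\Rpc\Internet"))
        {
            // REG_MULTI_SZ listing the allowed dynamic port range(s)
            key.SetValue("Ports", new string[] { "20000-21000" }, RegistryValueKind.MultiString);
            // REG_SZ flags telling RPC to hand out ports from the range above
            key.SetValue("PortsInternetAvailable", "Y", RegistryValueKind.String);
            key.SetValue("UseInternetPorts", "Y", RegistryValueKind.String);
        }
    }
}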

Then all you have to do is adjust your firewall settings to match the port configuration above.

  • Allow inbound & outbound to port 135
  • Allow inbound & outbound to port range 20000-21000

That’s it, you should be able to run your distributed transactions. If you need more detailed instructions, you can find them in KB 250367.

Written by Luka Ferlež

August 23, 2012 at 13:03

Posted in Programing


Add routes on VPN connect with Powershell and Task Scheduler


At my company we use a Microsoft ISA server for our firewall/VPN server. Being able to access the servers at my company via VPN required me to do one of two things:

  1. Use default gateway of the remote network
  2. Add static routes each time I connect via VPN

Option number one has the drawback that all my traffic would be directed through the VPN connection, and my company has a very restrictive security policy which allows me to access only a couple of servers (TFS, SQL server, …). That effectively means that when connected to the VPN I cannot use the internet or any other network resource.

Option number two requires that you add each of the routes to the routing table every time you connect the VPN. This cannot be done via a simple batch script, since the IP of the gateway changes on each connect.

So I thought to myself that there should be a better way to do this. With some basic Googling I quickly came up with an elegant solution. The first step towards the solution was a piece found on this blog. The blog describes the very same problem that I was facing and provides a simple Powershell script that handles the routes. Although this Powershell script does what is needed efficiently, it didn’t completely satisfy me.

Why? Well, simply because I am lazy and don’t like having to click on a Powershell script on the desktop every time I make a VPN connection.

Doing some more Googling brought me to a Technet page that described how to use the Windows Task Scheduler to trigger the Powershell script execution on each VPN connect. Modifying the snippet, I created this command, which is executed in the Powershell command prompt.

schtasks /create /F /TN "VPN Connection Update" /TR "Powershell.exe -NonInteractive -command C:\vpn.ps1" /SC ONEVENT /EC Application /MO " *[System[(Level=4 or Level=0) and (EventID=20225)]] and *[EventData[Data='VPN NAME']]"

In this command, change the path to your script, and change the last part where it says VPN NAME to the name of your VPN connection. This will ensure that the Task Scheduler executes your script only when you connect that specific VPN connection. The complete explanation of the settings in the command can be found on the before-mentioned Technet page.

So I was almost there with the solution, but the script itself did not satisfy me, because I had to add multiple routes and ensure that the routes do not already exist. I modified the original script into this version.

# ---------------------------------------------------------------------------------------
# Initial version:
# by Chris @ 30.9.2008
# ---------------------------------------------------------------------------------------
# Modified version:
# by Luka Ferlež @ 6.10.2010
# ---------------------------------------------------------------------------------------
# Add IP routes across a VPN via a DHCP assigned IP address
# Configuration
# ---------------------------------------------------------------------------------------
# Route IP addresses (fill in the destination addresses you need)
$ips = @("", "", "")
# VPN connection IP prefix
$vpnIP = "192.168.90."
# ---------------------------------------------------------------------------------------
# Get the IP address of the VPN connection
$vpnip = ipconfig | findstr $vpnIP
# ---------------------------------------------------------------------------------------
# If we don't have an IP address on the VPN, error and quit
if (!$vpnip) {
    "You do not have an IP address on the VPN"
    exit
}
# ---------------------------------------------------------------------------------------
# Trim any leading/trailing whitespace
$vpnip = $vpnip.Trim()
# ---------------------------------------------------------------------------------------
# Split the contents of $vpnip in to an array
$vpnip = $vpnip.Split(" ")
# ---------------------------------------------------------------------------------------
# Find out the depth of our IP address in the array
$bit = $vpnip.Length - 1
# ---------------------------------------------------------------------------------------
# Get out just our IP address on the VPN
$vpnip = $vpnip[$bit]
# ---------------------------------------------------------------------------------------
# Delete routes if existing
foreach ($ip in $ips) {
    $hasRoute = route print | findstr $ip
    if ($hasRoute) {
        "Deleting route " + $ip
        route delete $ip
    }
}
# ---------------------------------------------------------------------------------------
# Add whatever routes we need, using the VPN address obtained above as the gateway
# (host-route mask assumed here; the mask value was lost from the original listing)
foreach ($ip in $ips) {
    "Adding route " + $ip
    route add $ip MASK 255.255.255.255 $vpnip
}

This script allows you to simply add the necessary routes at the top of the script, and the script will process them. Enjoy.

Written by Luka Ferlež

October 6, 2010 at 22:21

SQL 2008 Change Data Capture – The basics


Continuing on from the previous article on SQL 2008 data audit, wittily called The beginning, we now explore the basics of Change Data Capture in SQL server 2008. The first thing we need to do is enable change data capture for our database. Unfortunately there is no UI to do this, so you have to do it with a T-SQL command.

Enabling on database level

USE Test;
EXECUTE sys.sp_cdc_enable_db;

This line of code will enable your database for change data capture. You enable and disable change data capture with two system stored procedures, sys.sp_cdc_enable_db and sys.sp_cdc_disable_db. You can check the change made to your database by running the next command.

SELECT name, is_cdc_enabled
FROM sys.databases


You can see that the Test database has been enabled for change data capture. At this point the stored procedure has created the cdc schema with its database-wide metadata tables and set the is_cdc_enabled property in sys.databases to true; your own tables are not yet affected.

Enabling on table level

USE Test;
EXECUTE sys.sp_cdc_enable_table
    @source_schema = N'dbo'
    , @source_name = N'Test'
    , @role_name = NULL

This line of code will enable your table for change data capture, which you can check by running the next command. You enable and disable change data capture for tables with two system stored procedures, sys.sp_cdc_enable_table and sys.sp_cdc_disable_table. When you enable the first table in the database for change data capture, SQL server will create several tables, functions and jobs for your database. The changes made to your database should look like this.

SELECT name, is_tracked_by_cdc
FROM sys.tables

The table that will actually capture your data is cdc.dbo_Test_CT; the other cdc tables are supporting tables for data capture and exist only once per database. Check more in detail about the tables, stored procedures and functions.

Change data capture table

The data stored in the change data capture tables consists of some change data capture columns plus your table’s columns.


The change data capture table for our dbo.Test table contains the following columns. We have __$start_lsn and __$end_lsn, which mark the first and last log sequence numbers under which the changes were recorded in the transaction log; then there is __$seqval, which gives the order of the row change within a transaction; __$operation, which signals the type of operation that was executed; and last, __$update_mask, a bit mask showing which columns were updated in the row change. These columns appear in any change data capture table, and after them come your table columns with their respective data types. So let’s make a change to our table and see what happens.

Recording change data capture

INSERT INTO dbo.Test
(Col1, Col2)
VALUES(500, 600)

INSERT INTO dbo.Test
(Col1, Col2)
VALUES(501, 601)

So let’s see what has been recorded in our change data capture table cdc.dbo_Test_CT.

SELECT *
FROM cdc.dbo_Test_CT


Now let’s perform a few more operations and see what we get.

-- single-row WHERE clauses assumed; they were lost from the original listing,
-- but the operation counts below only add up for single-row statements
UPDATE dbo.Test
SET Col1 = 503
WHERE Col2 = 600

UPDATE dbo.Test
SET Col1 = 504
WHERE Col2 = 600

DELETE dbo.Test
WHERE Col2 = 601


Not what you expected? We have executed 5 DML statements in total, but the change data capture table contains 7 operations. How come? Let’s run another query which will make things clearer.

Reading change capture data

-- column list reconstructed; the original listing was truncated
SELECT
    CASE CT.__$operation
        WHEN 1 THEN 'Delete'
        WHEN 2 THEN 'Insert'
        WHEN 3 THEN 'Update - Pre'
        WHEN 4 THEN 'Update - Post'
    END AS Operation,
    CT.*,
    LSN.tran_begin_time,
    LSN.tran_end_time,
    LSN.tran_id
FROM
    cdc.dbo_Test_CT AS CT INNER JOIN
    cdc.lsn_time_mapping AS LSN ON CT.__$start_lsn = LSN.start_lsn


What we have done here is join the change data capture table for our Test table with the cdc.lsn_time_mapping table, which stores the mapping between log sequence numbers and time values, allowing us to tie times to specific log sequence numbers. One more important thing we have here is the tran_id field, which we will use in the next article. Now we can see why we have 7 records instead of the 5 we expected. When capturing an UPDATE statement, change data capture creates two records: one with an __$operation value of 3 and one with a value of 4. The record with value 3 represents the data that was in the row before the update was committed, and the record with value 4 represents the data after the commit.

One more thing you can see here is the times when the transactions were started and completed. This is very useful when trying to get the changes in a specific time frame, which can be done by using the change data capture functions provided by SQL server.

Reading interval change capture data

The first thing we want to do is find out which log sequence numbers relate to the time frame we want to look at.

SELECT
    sys.fn_cdc_map_time_to_lsn ('smallest greater than or equal', '2009-06-26 15:37:46.577') AS from_lsn
    , sys.fn_cdc_map_time_to_lsn ('largest less than or equal', GETDATE()) AS to_lsn

Here we use the sys.fn_cdc_map_time_to_lsn function to retrieve the log sequence numbers for the start of our interval (smallest where time is greater than or equal to ‘2009-06-26 15:37:46.577’) and for the end of our interval (largest where time is less than or equal to GETDATE()). Next we have to use those log sequence numbers to get something useful from the table, which we can do with another pair of functions, cdc.fn_cdc_get_all_changes_dbo_Test and cdc.fn_cdc_get_net_changes_dbo_Test; the difference between the two will be apparent immediately.

-- the literal start LSN from the original listing was lost, so we recompute it
DECLARE @from_lsn binary(10) = sys.fn_cdc_map_time_to_lsn ('smallest greater than or equal', '2009-06-26 15:37:46.577');

SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_Test (
    @from_lsn
    , 0x0000006A000001F70001
    , 'all')

SELECT *
FROM cdc.fn_cdc_get_net_changes_dbo_Test (
    @from_lsn
    , 0x0000006A000001F70001
    , 'all')

This is the result:



The cdc.fn_cdc_get_all_changes function returned all the changes that happened in our chosen interval, giving us 4 recorded changes in total. The second query, using cdc.fn_cdc_get_net_changes, returned only 2 records for the same interval. Why? Simple: the all_changes function returns, as its name says, all changes that fall between our LSN boundaries, while the net_changes function returns only the net result of the interval. What does that mean? As you can see, we have only one record per row, and that record is the last operation executed on that row in the given interval.

One more thing to notice is that records with __$operation = 3 (the pre-update image of the row) do not appear in the result set, although they fall into our selected interval. When called with the ‘all’ row filter, these functions return only one record per change, so the __$operation = 3 records are left out.

Reading changed columns

A useful thing to know is which column changed in an operation. The thing to use here is another scalar-valued function provided by change data capture, sys.fn_cdc_has_column_changed, which returns 1 or 0 depending on whether the column has been changed.

SELECT *
    , sys.fn_cdc_has_column_changed ('dbo_Test', 'ID', __$update_mask) AS is_id_changed
    , sys.fn_cdc_has_column_changed ('dbo_Test', 'Col1', __$update_mask) AS is_col1_changed
    , sys.fn_cdc_has_column_changed ('dbo_Test', 'Col2', __$update_mask) AS is_col2_changed
FROM
    cdc.fn_cdc_get_all_changes_dbo_Test (
        @from_lsn -- start LSN as above; the literal value was lost from the original listing
        , 0x0000006A000001F70001
        , 'all')


This way we can see which columns have been changed in each operation and perform the actions required by our business rules. One thing to note is that for insert and delete operations (__$operation = 2 or 1) all columns are signaled as changed, because you have either created values for all columns or deleted values for all columns. When looking at the changes for an update (__$operation = 4), only the actually updated columns are signaled.

This concludes this session, till next time….

Written by Luka Ferlež

June 24, 2009 at 16:00

Posted in SQL


SQL 2008 Change Data Capture – The beginning


Data auditing is a requirement of most enterprise systems, and until now it was a pain to implement with SQL server. When data audit was required in pre-2008 SQL server databases, you had to rely on solutions like triggers or some custom change tracking on the application layer. With SQL server 2008, Microsoft has provided a change data capture mechanism which is an integral part of the SQL server 2008 database engine. Change data capture is a powerful mechanism for gathering the changes made to data on the SQL server platform, and it provides its functionality with little overhead.

How this works

Well, in short: simply. The implementation of change data capture is based on an already existing element of the database engine, the transaction log, and does not require you to alter your db, schema or tables. When you enable change data capture on a db, SQL server will create a new schema called “cdc” with several tables, stored procedures and functions. Along with the changes made to your db, the enabling process also creates two jobs in your SQL agent for your db: one for capture and one for cleanup. These two jobs are the basis of the asynchronous operation of change data capture. The capture job is executed by default every 5 seconds; it reads the transaction log for changes since the last job execution and stores them in the change data capture tables. The cleanup job runs by default once a day and cleans out the stored data in the data capture tables, meaning that you have to gather that data and transfer it somewhere for permanent storage before the cleanup job runs.


Microsoft published on MSDN a SQL Server Best Practices Article on “Tuning the Performance of Change Data Capture in SQL Server 2008”, going at length into why, how and where to tune the performance of change data capture. The general conclusion of this article, and of many articles on the web, is that change data capture has very little overhead and can be used comfortably on an OLTP database with no fear of grinding your database server to a standstill. You will see a slight decrease in performance through the increased disk writes from the change data capture job, but the decrease, if the capture job is properly configured, should not be more than 5% by my tests. You will see an increase in the disk queue length for the log file, but only a slight one for the data disk.


There are of course problems with SQL change data capture. The three main problems that I have faced in working with it are storing contextual information, gathering change data in one place, and schema altering.

Contextual information

Storing contextual information about a change is a pain with change data capture. In most cases, when performing an audit, you want to store some contextual data about the operation which is changing the data, like the user id, date, reason of change (form, command, etc.)…. Since change data capture relies on an asynchronous read of the transaction log, at the time of the read SQL server does not have any information about the context under which the change occurred, and therefore that data cannot be gathered.

Gathering change data in one place

Change data capture, as it is designed and implemented (as far as I can figure it), is intended to capture data for temporary storage before you transfer it to a data warehouse for permanent storage. The data warehouse is intended to have the same schema as the OLTP database (no primary or foreign keys), and you simply copy the rows into the data warehouse. There is no built-in way to gather all the data in one place for review, as it was not intended for such use.

Schema altering

This brings us to the last problem, altering the schema. If you alter the schema of your OLTP db, you have to change the schema of your data warehouse db. That is no problem when adding columns, but what about when you delete columns, and how do you then display the data to the user? There is one more thing, which I think is just pure laziness, that Microsoft did not implement. When you change the schema of a table under change data capture, the capture continues to track the same columns (and their data types) as before, with no warning to you that you should do something to start capturing the new columns. It would suffice that, when changing a table under change data capture, you would get an error saying you cannot do that until you drop change data capture for that table, but it seems that was too much for Microsoft.

There are workarounds for all of these problems, and I will show them in the next article on change data capture.


SQL server 2008 change data capture is a welcome addition to the SQL server feature list, which helps us create audits for our databases easily and with little overhead. With some missing features which have simple workarounds, it can be used very effectively for auditing your database.

Check back for more articles on change data capture with the details of implementation and the workarounds needed to make the whole thing functional.

Written by Luka Ferlež

June 20, 2009 at 18:34

Software engineering is bad engineering


This is a common phrase used in reference to software in general. I have heard it from my friends, customers and even co-workers, all of whom are often ignorant of the complexity and the number of different factors involved in the execution of a software application. I’m especially amazed when my programming colleagues criticize some software with no objective reason, fully knowing that their own software is equally flawed. The very same question was raised on my new favorite programming Q&A site, stackoverflow.

Bugs in software are something we have to accept will happen as long as there is software. Now most people who are not familiar with programming always ask why software can’t be bug free. That’s when I ask them if they have experienced something like the following:

  • Oil leak
  • Engine trouble
  • Leaky faucet
  • Wobbly chair
  • Squeaky doors
  • VCR eating the tape

Of course the answer is yes, we have all experienced some of these if not all, and these are the equivalents of software bugs. Oil is not meant to leak from your car – bug; chairs are not meant to be wobbly – bug. In short, my answer is no, we can’t have bug-free software, because nothing around us is bug free, not even ourselves; we can only try to keep the bugs down to a minimum.

You are not meant to be sick, but you still get sick; is that a bug in your immune system, or was it caused by something outside of your body’s control? A strange question, you might think, but what it comes down to is that our body starts to malfunction because something out of its control has influenced it and disrupted its normal way of functioning. The same applies to software.

Software developers constantly combat the external influences on their applications, be it human input, the operating environment, incompatible software or whatever other external source of influence. This invites so many variables and possibilities into an already highly complex system that is being developed by humans and is thus by default faulty. When developing software you have to take into account the different hardware and software combinations on which your software is supposed to run and the users who are supposed to use it, and when you factor all of that in, you get a massive number of possibilities. Now, as I see it, most applications today are quite stable for what & where they are meant to run.

Software engineers are trying to combat all of those possibilities and the bugs that arise from such a number of different scenarios, and so far we have seen with each new release of software that they are managing to decrease the number of bugs rapidly and are producing real quality code.

Anybody remember Windows 95? 🙂

Written by Luka Ferlež

December 14, 2008 at 1:03

Posted in Programing


Using TransactionScope for handling transactions


Implementing transactions from C#/VB code today might seem easier than ever before, but the path to easier and effective transaction handling has many traps along the way. Microsoft made the life of developers seemingly easier by implementing the TransactionScope class in the .net framework 2.0. I say seemingly easier because you might sometimes find yourself baffled by the way your code behaves.

The TransactionScope class uses the MS Distributed Transaction Coordinator to manage transactions, so before you start using TransactionScope, make sure you have the DTC running on the web and database servers and that you have granted enough security permissions for the DTC to be able to communicate on those machines.

So to get started with TransactionScope, here is a simple code example that seems perfectly OK.

using(TransactionScope outerScope = new TransactionScope())
{
    // Execute query 1
    using(TransactionScope innerScope = new TransactionScope())
    {
        try
        {
            // Execute query 2
        }
        catch (Exception)
        {
            // exception handled here
        }
        innerScope.Complete();
    }
    outerScope.Complete(); // this is where the surprise surfaces
}



Now if query 2, which is inside the try/catch block, raises an error, you will catch the exception in the try/catch block and handle it. But here is a surprise: the application will throw an ObjectDisposedException when you try to complete the outer transaction. This is because the DTC has already caught the exception, and although you have handled it, the TransactionScope objects were already disposed by the .Net code and the transaction has been rolled back. Notice that I said “objects”: both of the TransactionScope objects have been disposed, as they are part of the same transaction. This is just one example where you might be surprised that the code is not behaving as you would expect.

TransactionScope implements something referred to as an “ambient transaction”. This is the transaction in which all database queries are executed. When I say all queries, I mean all queries executed between the creation of the TransactionScope object and the call to the Complete() method, including all method calls and their method calls and so on that are executed between those two lines. That means you don’t need to handle the transaction in every method that participates in the transaction, unless you want to.
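As an illustration, here is a sketch with hypothetical helper methods and a placeholder connection string: nothing transaction-related is passed around, yet every connection opened inside the scope enlists in the same ambient transaction.

using System.Data.SqlClient;
using System.Transactions;

class AmbientTransactionSketch
{
    // placeholder connection string
    static string connectionString = "Data Source=.;Initial Catalog=Test;Integrated Security=True";

    static void Main()
    {
        using (TransactionScope scope = new TransactionScope())
        {
            UpdateCustomer();   // opens its own connection, still enlists
            WriteAuditRecord(); // same ambient transaction, nothing passed around
            scope.Complete();   // nothing is committed until this call
        }
    }

    static void UpdateCustomer()
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open(); // enlists automatically in Transaction.Current
            // ... execute the update ...
        }
    }

    static void WriteAuditRecord()
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open(); // also enlists in the same ambient transaction
            // ... execute the insert ...
        }
    }
}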

You can control the manner in which a method participates in a transaction quite easily. Here is a piece of code to demonstrate:

using(TransactionScope outerScope = new TransactionScope())
{
    // Execute query 1
    using(TransactionScope innerScope =
          new TransactionScope(TransactionScopeOption.Required))
    {
        // Execute query 2
        innerScope.Complete();
    }
    using (TransactionScope innerScope =
           new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        // Execute query 3
        innerScope.Complete();
    }
    using (TransactionScope innerScope =
           new TransactionScope(TransactionScopeOption.Suppress))
    {
        // Execute query 4
        innerScope.Complete();
    }
    outerScope.Complete();
}

This demonstrates the use of the TransactionScopeOption enumeration to control the way a piece of code participates in a transaction. The enumeration consists of three options: Required, RequiresNew and Suppress. The default option for the TransactionScope constructor is “Required”, which states that an ambient transaction is required for the operations; if none exists, one will be created. The option “Suppress” states that the TransactionScope will not participate in the transaction, and therefore all database queries are executed without a transaction. The option “RequiresNew” states that a new ambient transaction will always be created and become the root transaction.
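A typical use for “Suppress” is a log or audit row that must survive even if the surrounding business transaction rolls back. A sketch (the audit insert is hypothetical):

using (TransactionScope outerScope = new TransactionScope())
{
    // ... business queries participating in the ambient transaction ...

    using (TransactionScope auditScope =
           new TransactionScope(TransactionScopeOption.Suppress))
    {
        // hypothetical audit insert, executed outside the transaction
        // ... execute audit insert ...
        auditScope.Complete();
    }

    // if we never reach this line the business queries roll back,
    // but the audit row above is already persisted
    outerScope.Complete();
}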

There are further options that can be modified when using the TransactionScope class, by using a different constructor, TransactionScope(TransactionScopeOption, TransactionOptions), that takes a TransactionOptions struct as a parameter. Here is a code example.

TransactionOptions options = new TransactionOptions();
options.Timeout = TimeSpan.FromMinutes(1);
options.IsolationLevel = IsolationLevel.Serializable;

using(TransactionScope scope =
      new TransactionScope(TransactionScopeOption.Required, options))
{
    // queries here run serializable and must finish within 1 minute
    scope.Complete();
}

The TransactionOptions struct has 2 properties, Timeout and IsolationLevel. The first is self-explanatory: the time allowed before the transaction must complete. The second is more important; I would say crucial to effective handling of transactions. The IsolationLevel enumeration consists of several levels of isolation: Serializable, RepeatableRead, ReadCommitted, ReadUncommitted, Snapshot, Chaos and Unspecified, with the “Serializable” isolation level being the strictest and “ReadUncommitted” being the least strict form of isolation.

Serializable means that all transactions occur in a completely isolated fashion, i.e. as if all transactions in the system had executed serially, one after the other; no other transaction can touch the affected data until the running one is completed. This type of isolation should be used very carefully, since it can cause massive timeouts etc. if used with long-lasting transactions.

RepeatableRead is a step down from the Serializable isolation level: multiple transactions can run at the same time, but all data read during the transaction cannot be modified by others (exclusive row/page locks), and volatile data cannot be read.

ReadCommitted is a further step down in isolation: other transactions are allowed to read and modify data that has been used in a select statement, but volatile (uncommitted) data cannot be read.

ReadUncommitted is the least strict isolation level: transactions can see uncommitted changes performed by other transactions, allowing for dirty reads.
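For example, a long-running report that can tolerate dirty reads might combine a generous timeout with the least strict level, so it does not block the writers (a sketch, not from the original post):

TransactionOptions reportOptions = new TransactionOptions();
reportOptions.Timeout = TimeSpan.FromMinutes(10);
reportOptions.IsolationLevel = IsolationLevel.ReadUncommitted;

using (TransactionScope reportScope =
       new TransactionScope(TransactionScopeOption.Required, reportOptions))
{
    // ... read-only report queries, dirty reads acceptable ...
    reportScope.Complete();
}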

More on isolation levels on Wikipedia.

That sums up the basic use of the TransactionScope class with more to come 🙂

Written by Luka Ferlež

August 5, 2008 at 17:04

Putting C# into SQL


In my latest project we came across the need to get results from a web service inside SQL Server. The first basic reason we wanted such an integration of web services is so that we could use them transparently in Microsoft Reporting Services. That of course is not the only reason: we also wanted to use the same results as an input to some of our stored procedures, and to do that we need to be able to get the results from the web service through a stored procedure or a table-valued function.

The process itself is quite simple if you know where to begin. The beginning is in SQL Server’s implementation of the CLR; with the use of the CLR you can easily show those DBAs how real programming is done :). Unfortunately for the DBAs this is the only way this can be done, so they need to trust us “developers” :). Seriously, this is not such a problem to implement.

Pop open Visual Studio and create a new “Sql Server Project” in the language of your preference (VB or C#), it’s all the same, but I will use C#. To start us off we need to configure the project first, so open “Project Properties”, go to the “Build” section, find the option “Generate serialization assembly” and set it from auto to on. You have to do this because the SQL server CLR does not allow dynamic XML serialization, I guess it’s a security thing. Now when you build your project you will end up with two dlls: one for your code, “project.dll”, and the other one for the serialization assembly, “project.serialization.dll”.

Now add your web service to the project just like in any other project, through the “Add Web Reference” menu. After you have done so, let’s add a new item to the project: a stored procedure item. Here is some sample code for the stored procedure.

[Microsoft.SqlServer.Server.SqlProcedure]
public static void WebServiceTest()
{
    // Initialize web service, adjust it for your service
    using (WebService ws = new WebService())
    {
        // Create a template record for the result set
        // The strings are the names of the columns of the result set
        SqlMetaData[] metaData = new SqlMetaData[3];
        // first field of type int
        metaData[0] = new SqlMetaData("field1", SqlDbType.Int);
        // second field of type varchar, -1 is the equivalent of varchar(max)
        metaData[1] = new SqlMetaData("field2", SqlDbType.VarChar, -1);
        // third field of type decimal, equivalent of decimal(12,4)
        metaData[2] = new SqlMetaData("field3", SqlDbType.Decimal, 12, 4);
        SqlDataRecord record = new SqlDataRecord(metaData);

        // Mark the beginning of the result-set.
        SqlContext.Pipe.SendResultsStart(record);

        // Get the results from the web service, query a method on your web service
        foreach (Result item in ws.GetResult())
        {
            // set the values for field1, field2 and field3 by ordinal
            record.SetInt32(0, item.Value1);
            record.SetString(1, item.Value2);
            record.SetDecimal(2, item.Value3);

            // Send the row back to the client.
            SqlContext.Pipe.SendResultsRow(record);
        }

        // Mark the end of the result-set.
        SqlContext.Pipe.SendResultsEnd();
    }
}
Let’s look at this code. First we initialize the web service; you will have to adjust this to your own service. Then we initialize a template record for our result set with three columns: column one is named “field1” and has a type of int; column two is named “field2” and has a type of varchar with length -1, i.e. varchar(max); column three is named “field3” and has a type of decimal with precision 12 and scale 4, i.e. decimal(12,4). With SendResultsStart we start the return of our result set and tell SQL that we will be returning rows based on the record template we have just created. In the foreach loop we iterate through the web service results, set the values and use the SQL pipe to send each row back. Finally, SendResultsEnd ends the sending of results. That is it for the code part, so now we need to register the assembly in the SQL server. First you have to set the database trustworthy option to on (ALTER DATABASE Test SET TRUSTWORTHY ON).


If we do not set this option to on, we can’t register our assemblies as UNSAFE (both project.dll and project.serialization.dll need to be registered with CREATE ASSEMBLY), and if we can’t do that, the SQL CLR won’t allow us to connect to the web service. I think that you could use the EXTERNAL_ACCESS option, but I have not tried that.

-- EXTERNAL NAME is a three-part name: assembly.class.method
-- (StoredProcedures is the default class name generated by the project template)
CREATE PROCEDURE [SchemaName].[WebServiceTest]
AS EXTERNAL NAME [AssemblyName].[StoredProcedures].[WebServiceTest]

That is it, now you can test out your new SQL CLR procedure.

Written by Luka Ferlež

March 22, 2008 at 23:39

Posted in C#, Programing, SQL


Connecting to TFS


Like so many things in the programming world, establishing a connection to a TF server can be done in a couple of different ways. I have previously specified everything that you need to start developing an application that will interface with a TF server, so I will skip the introduction to TFS development and get my hands dirty right away.

From your code, C# code that is, you can establish a connection to a TF server in two basic ways. Both of them create an instance of the TeamFoundationServer class. One is through the service class TeamFoundationServerFactory, and the other is creating an instance of the TeamFoundationServer class manually through a public constructor.

The main difference between the two is that the static TeamFoundationServerFactory class will cache the connected servers, so that a request is not sent to the TF server every time you ask for a server instance, thus not clogging the server with requests.

TeamFoundationServer server = TeamFoundationServerFactory.GetServer("http://" + serverName + ":" + serverPort);

Now you have your instance of a TeamFoundationServer. Each time you call the GetServer(String) method, it will check whether the server has already been accessed; if it has, the method will return a cached instance, otherwise it will create a new one. Your other option is to create an instance of a TeamFoundationServer on your own.

TeamFoundationServer server = new TeamFoundationServer("http://" + serverName + ":" + serverPort);

If you do it this way, a request will be sent to the TF server to check if it exists; if you do that every time you need something from the server, you will generate a lot of requests to the server. Basically, if you use the TeamFoundationServerFactory class, the framework will handle the caching of the server for you; if you create an instance on your own, you will have to store that instance and keep track of it yourself.
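If you want to see the caching at work, a quick sketch:

// both calls go through the factory; the second call should return
// the instance cached by the first instead of contacting the server again
TeamFoundationServer first = TeamFoundationServerFactory.GetServer("http://" + serverName + ":" + serverPort);
TeamFoundationServer second = TeamFoundationServerFactory.GetServer("http://" + serverName + ":" + serverPort);
bool sameInstance = object.ReferenceEquals(first, second); // expected: true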


The TeamFoundationServer constructor way has one benefit over the GetServer(String) method. The benefit is seen when you get around to authenticating users. When you want to provide user authentication through the GetServer method, you will use the overload that looks like GetServer(String, ICredentialsProvider), using the UICredentialsProvider class like so:

TeamFoundationServer server = TeamFoundationServerFactory.GetServer("http://" + serverName + ":" + serverPort, new UICredentialsProvider());


What this will do is create a pop-up requesting the user credentials every time the user needs to be re-authenticated. The EnsureAuthenticated() method will first check whether the user has already been authenticated, and if not it will show a login pop-up for the user credentials and then authenticate the user. In your software you might want to save the user credentials to use them any time the user needs to authenticate. That’s when the TeamFoundationServer(String, ICredentials, ICredentialsProvider) constructor comes into play. With this constructor you can specify, in addition to a UICredentialsProvider object that is used as a fallback if the authentication fails, a separate NetworkCredential object to be used for authentication.

// userName, password and domain are placeholders for your saved credentials
NetworkCredential networkCredentials = new NetworkCredential(userName, password, domain);

TeamFoundationServer server = new TeamFoundationServer(
   "http://" + serverName + ":" + serverPort,
   networkCredentials,
   new UICredentialsProvider());

server.EnsureAuthenticated();


This way the code will try to authenticate first with the supplied NetworkCredentials, and if that fails it will use the UICredentialsProvider to create the pop-up and ask the user for credentials. This way you can save and reuse the user’s credentials when you want to, and if the authentication fails the framework will ask the user for credentials on its own.


I was having trouble connecting to my company’s TF server, since our workstations are in one domain and the TF server is in a separate domain, with an ISA and a web filtering server in between. I was getting the HTTP error code 407 “proxy authentication failed”; I lost around 4 hours until I found a way to deal with it. The solution is quite simple, you just add this to your config file.

 <defaultProxy enabled="true" useDefaultCredentials="true">
   <bypasslist>
     <add address="serverName" />
   </bypasslist>
 </defaultProxy>

When you put this inside the <system.net> section of your config file, the application will bypass the proxy for the specified network addresses.

Written by Luka Ferlež

March 18, 2008 at 21:35

Web 2.0


I’ve been doing some programming lately, since I did none while I was on vacation. A couple of years ago I made a web site for my father’s company, and it has been working quite well, but I had to make some changes to it.

So I started to redo the CSS and so on, and came to the point of redoing some JavaScript.

Now, I was never a big fan of scripting languages; I did do my share of programming in JavaScript, PHP, ASP (VBScript) and so on, and it was never to my liking. The only language I liked using was PHP, because there I could use OOP, and that was a big thing for me since I don’t like messy code 🙂

But I abandoned PHP because there were some things that in my opinion make large-scale programming difficult. Why, you ask? Simply, one big downside is the lack of a top-notch debugger that can handle and work in all scenarios the same (I tried Zend but did not like it; I liked the work done with Eclipse). The one thing I could never forgive PHP is the way it handles different types of variables, and that there is no checking of the variables. For example, if in a certain function I create a variable named eclipse, and then somewhere I try to reference it but make a typo and write eclpse = 1, then in that function I will have both variables and will be surprised to find out that my code does not work. Of course I am glad to see PHP developing further, as I believe PHP is ideal for sites that require some server work done, but not too much :).

Now, since we live in the world of the expanding Web 2.0, I would expect that the programming languages used in presenting data on the web are also evolving. By these I don’t mean the languages that do their work on the server, like PHP or VBScript, but rather the ones that do their work on the client, like JavaScript, CSS, HTML, XHTML and so on. I purposely don’t put XML in any of these groups, since I don’t believe that XML is a programming language (nor is HTML, but what the hell).

If you look at the parts of code that are executed on the client, you will see that almost all of them are evolving. CSS started with 1.0, is now at 2.1, and 3.0 is under development; the same goes for HTML and XHTML, now at versions 4.01 and 1.0 but with high emphasis on development, and versions 5.0 and 2.0 are in the works.

JavaScript, however, did not change much since it was introduced in 1995. Yes, it did change, I know, and its use is highly dependent on the DOM, but the thing I dislike most is the lack of OOP. I know that you can create your own objects and so on, but I don’t like the way that is accomplished, which I think is unnecessarily complicated. Again, I dislike that the language is not strongly typed, and the whole thing with variable types is symptomatic of scripting languages.

Today, with the expanding world of Web 2.0 and exciting new possibilities like AJAX, which is the cornerstone of Web 2.0, we as programmers would like to give the web the look and feel of the desktop. I don’t believe that will ever be fully possible, since the client essentially operates in offline mode, connecting to the server just to get the content, and I believe that JavaScript and the DOM will be the bottleneck of Web 2.0 development.

Written by Luka Ferlež

March 18, 2008 at 21:29

Posted in Programing

Tagged with