Archive for April, 2008

Sacrifice

Apr 17

So a few of the passionate conversations here at the MVP summit have been around sacrificing testing when a deadline looms. Apparently this is a common practice amongst MVPs, which surprised a lot of us.

D'arcy, Rod, and I were talking about this at lunch, so I thought I would add our thoughts to the mix:

There are a few things you can do when under a looming deadline:

Add Resources
By adding resources to a project you can get more done in more time…. in theory. Granted there is only so much a new person can do. They have to be interviewed, trained, and not be a drag on the team. Adding more resources could actually cause you to not meet your deadline unless a lot of foresight and careful consideration is taken into place.

Cut Features
It's much easier to hit a deadline when there is nothing to do. I/we feel this is usually one of the best options to give your clients. A client is usually happy to delay a small or lower-priority item in order to get the higher-priority items on time. Then follow up with a small release containing the one feature, or simply move it to the next cycle/release.

One "feature" that lots of people cut is security. Security is an easy thing for people to consider dropping because when you remove security from the application, the application still works and has all the functions the user wants. In my experience, when security becomes a second-class citizen it never quite comes back to the level it otherwise would have been.

Cut Quality
This is the debated one. If you just throw testing out the window you are more than likely going to ship a buggy product, and on the next cycle you will spend lots of time fixing/managing/coordinating bugs, which makes that cycle take longer (so we should cut more quality so we can spend time fixing the issues from the last time we cut quality). This is usually quite a short-sighted view in our opinion.

Push The Deadline
This is often the option that is not thought about. Why not delay for a week or two to get this out right? What we are doing usually does not have people's lives depending on it (and even if it did, I am sure we would want it to be right rather than right on time). The key here is to manage expectations. Tell your consumers as early as possible that there might be a delay and that you can either be late or remove features (or remove quality, apparently, although I/we usually don't agree that is acceptable).

Wording
A few funny sayings/wordings came out of our chat:
“Drop testing? Sure we can drop quality. Not a problem.”
“I want a car tomorrow. I know it will catch on fire half way down the street but we can patch that later right?”
“If our coders don’t produce quality without tests then we will just replace the coders until we get better ones. They are in infinite supply after all”

Filed Under: Uncategorized

WCF Message Streaming

Apr 14

In my previous post, WCF And Large Messages, I mentioned there was a better way to send large data. Since I have been getting a lot of traffic on this topic, here is the improved methodology:

One of the really sweet features of WCF is the ability to stream messages between client and server. By default, messages are buffered and sent only once they have been completely built.

While buffering works great for small messages, once you start sending large amounts of data (in my case a 50-70MB file) streaming really pays off. For my case, sending the data took an average of 23.3 seconds using the standard buffer-and-burst method described there, but only an average of 4 seconds using streaming.

Streaming is only supported by the basicHttpBinding, netTcpBinding, and netNamedPipeBinding bindings. If you are hosting your service in IIS6 your only option is basicHttpBinding (or create your own binding, but that is outside the scope of this post). If you are hosting in IIS7 then you will be able to use the TCP, named pipes, and MSMQ bindings as well.

Enabling streaming was surprisingly simple. All I had to do was create a new binding configuration:

  <basicHttpBinding>
    <binding name="StreamingFileTransferServicesBinding"
             transferMode="StreamedRequest"
             maxBufferSize="65536"
             maxReceivedMessageSize="204003200" />
  </basicHttpBinding>

And then set my service to use that binding configuration:

      <service behaviorConfiguration="MyBehavior" name="MyStreamingService">
        <endpoint address=""
                  binding="basicHttpBinding"
                  bindingConfiguration="StreamingFileTransferServicesBinding"
                  contract="IMyStreamingService" />
        <endpoint address="mex"
                  binding="mexHttpBinding"
                  contract="IMetadataExchange" />
      </service>

To dissect this a bit, I have set up a buffer size and a maxReceivedMessageSize, which control how much data is buffered before it is sent and how big those messages can be. To be honest I have not played with these settings very much yet, so you will probably want to tweak them for your own situation.

Also in the binding configuration there are several different transfer modes we can set up:

Streamed – Both in and out messages are streamed
StreamedRequest – Messages sent from client to server are streamed
StreamedResponse – Only messages returned from the server to the client are streamed
Buffered – This is the default of buffering all data and sending it in one burst

A BIG thing to note is that when using streams the only allowed data types are Message, Stream, or an IXmlSerializable implementation for ALL methods in your service! If we use "Streamed" as our transfer mode then we need BOTH our input parameters and our return value to be one of these types. If you just want to send data and return some small data object or primitive then use StreamedRequest or StreamedResponse.

Onwards to code!

 

Function ProcessFile(ByVal data As Stream) As DataContracts.ValidatedAuthority

As you can see here my interface is pretty simple. It takes in a stream of data and returns a simple object that shows how the file processing went.

As I mentioned before, because we are using streaming, all methods must take only Stream, Message, or IXmlSerializable parameters. If you want to have methods that do not meet this requirement then create a separate service that does not use the streaming behaviour.
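For reference, the full contract might look something like the sketch below. Only the ProcessFile signature comes from my code above; the attribute plumbing and the ValidatedAuthority stub are filled in as assumptions, just to show where the Stream restriction applies.

Imports System.IO
Imports System.Runtime.Serialization
Imports System.ServiceModel

' Sketch of a streaming-friendly contract: the streamed parameter must be a Stream.
<ServiceContract()> _
Public Interface IMyStreamingService
    <OperationContract()> _
    Function ProcessFile(ByVal data As Stream) As DataContracts.ValidatedAuthority
End Interface

Namespace DataContracts
    ' Hypothetical return type; the real one just reports how the file processing went.
    <DataContract()> _
    Public Class ValidatedAuthority
        <DataMember()> Public IsValid As Boolean
        <DataMember()> Public Message As String
    End Class
End Namespace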

Now if you are hosting in IIS you will still need to let the HTTPRuntime know that you are sending large data: <httpRuntime maxRequestLength="73400" executionTimeout="100" /> (or whatever settings you think are appropriate; note that maxRequestLength is specified in kilobytes).

A little housekeeping note is that you will need to dispose the stream on both the client and server. This is because there are actually two streams in two different app domains so both client and server will need to treat them as such.
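To make the disposal point concrete, a client call might look roughly like the sketch below. The ChannelFactory usage, endpoint configuration name, and file path are mine, not from the original service code; it reuses the contract sketched earlier.

Imports System.IO
Imports System.ServiceModel

Module StreamingClientSketch
    Sub Main()
        ' "StreamingClientEndpoint" is a hypothetical client-side endpoint configuration name.
        Dim factory As New ChannelFactory(Of IMyStreamingService)("StreamingClientEndpoint")
        Dim proxy As IMyStreamingService = factory.CreateChannel()

        ' Dispose the client-side stream; the server disposes its own copy in its app domain.
        Using fileStream As FileStream = File.OpenRead("c:\somelargefile.dat")
            Dim result As DataContracts.ValidatedAuthority = proxy.ProcessFile(fileStream)
        End Using

        factory.Close()
    End Sub
End Module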

Also for completeness here is my entire service model config section:

<system.serviceModel>
  <services>
    <!-- Streaming Service -->
    <service behaviorConfiguration="MyBehavior" name="MyStreamingService">
      <endpoint address=""
                binding="basicHttpBinding"
                bindingConfiguration="StreamingFileTransferServicesBinding"
                contract="IMyStreamingService" />
      <endpoint address="mex"
                binding="mexHttpBinding"
                contract="IMetadataExchange" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="MyBehavior">
        <serviceMetadata httpGetEnabled="true" />
        <serviceDebug includeExceptionDetailInFaults="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <bindings>
    <basicHttpBinding>
      <binding name="StreamingFileTransferServicesBinding"
               transferMode="StreamedRequest"
               maxBufferSize="65536"
               maxReceivedMessageSize="204003200" />
    </basicHttpBinding>
  </bindings>
</system.serviceModel>
Filed Under: WCF

Loading MS SQL Database With CSV Data

Apr 14

I recently had to load a lot of comma-separated data into a database table and never knew how easy it was to load CSV data into a table. Here is the T-SQL:

BULK
INSERT Address
FROM 'c:\address.csv'
WITH
    (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2
    )

So handy to have this feature. Simply point the FROM at the delimited data file on disk and set your delimiters. In my case I am using a comma for the field separator, but you could use any character (use \t for tab).

One other common situation is that the first row of your CSV file contains header information (as is the case in my example). If you want to ignore the first row and start reading data from the second row, use FIRSTROW = 2 (as shown) to skip the header record.

Filed Under: Sql

Advanced Salt/Hash Generation Techniques

Apr 9

Hopefully I have driven home the fact that salts are an important part of keeping a hash secure. We have done this using the strong random number generator RNGCryptoServiceProvider. Now, just because we applied a salt does not mean that our hash is rock solid. The hash and salt usually sit next to each other in the database/file they are stored in, like so:

UserId  UserName  Hash                         Salt                    Creation
100     John      eiqkluw9vbj3qw4io4hytrweg35  asdt234l;jt62lj652q346  6/7/2002
101     Sam       34723lkdsgoqep78t31jgto326   w46kjlqaklrejh3234l6j3  2/5/2004

If I were to steal this database I would know that you have salted your passwords, so a rainbow table type of attack will not be easy (I would need a massive rainbow table to crack a hash). Instead I would be looking at a brute force attack using the known salt, like this:

Dim computedHash As String = ComputeHash(randomWord & "asdt234l;jt62lj652q346")
If computedHash = "eiqkluw9vbj3qw4io4hytrweg35" Then
    Console.WriteLine("John's password is: " & randomWord)
End If
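The ComputeHash helper used in these snippets is never spelled out in the post; a minimal sketch, assuming SHA-256 with Base64 output, could look like this:

Imports System
Imports System.Security.Cryptography
Imports System.Text

Module HashingHelpers
    ' Minimal stand-in for the ComputeHash helper; the actual algorithm used is not specified.
    Public Function ComputeHash(ByVal input As String) As String
        Using sha As SHA256 = SHA256.Create()
            Return Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(input)))
        End Using
    End Function
End Module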

Unique Salt Placement
In the last example I made the assumption that the salt was appended to the end of the word to create the hash. There is no reason a developer couldn't use any one of these methods instead:

String hash = ComputeHash(password + salt)
String hash = ComputeHash(salt + password)
String hash = ComputeHash(salt + password+ salt)
String hash = ComputeHash(pass + salt + word)
… etc.

All of the above examples will result in a different hash of course, but joining the salt and the password in a unique way adds some security.

Salt In Another Table
One technique is to separate the salt and hash so they are not in the same table. This might fool an attacker if they don't know anything about databases or just don't look around to see if there is salt data related to the password record. To me the additional programming effort is not worth it for this trivial defence (but every bit can help).

Salt Not Obviously Stored
A technique I like is to not store a salt in a salt column. Instead I generate it off some fixed data that is associated with the user. In the above table we have a creation date. This date will never change so to generate a salt we can take a hash of the creation date!

Dim Salt As String = ComputeHash(user.creationDate)
Dim hash As String = ComputeHash(user.password & Salt)

So if we were to use this technique we would no longer have a salt column, as the salt is now generated based on the user's creation date. One thing to watch out for with this technique is to ensure the data is non-changing. If we used the user's phone number to generate our salts and the user changed their phone number, they could no longer log in (try explaining that one).

For this technique you could also use a GUID that you may have for a user. I would shy away from using an auto incrementing ID as that would be fairly trivial to predict.

Add Fixed Salt Data
One thing that can also be done is adding in a hardcoded bit of random data in addition to a per user salt like so:

String hash = ComputeHash(password + salt + "DRASFH%$!CJ^R##$^ADFH")

If an attacker were to steal the database they would know that the password is salted but would have a hard time brute forcing the password as they are missing the hard coded application data that was injected into the hash.
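Building on the ComputeHash sketch from earlier, the per-user salt plus the hardcoded application value might be combined like this (the helper name and layout are mine, added to the HashingHelpers module above):

' Hardcoded application data compiled into the binary, separate from the per-user salt.
Private Const AppSecret As String = "DRASFH%$!CJ^R##$^ADFH"

Public Function HashPassword(ByVal password As String, ByVal salt As String) As String
    ' Both the per-user salt and the fixed application value go into the hash.
    Return ComputeHash(password & salt & AppSecret)
End Function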

Conclusion
All of the techniques I have shown are about obfuscating the techniques used to generate salts and hashes. If an attacker were to gain access to more than just the data (i.e. source code or even the binaries) they could determine the technique used and start cracking.

By adding another layer though we have reduced our attack surface. Our attacker now needs not only database/file access but access to the code (either source or binaries). Our attacker also needs to have more knowledge. They now need to know about cracking hashes, database systems (if we are using a database as storage), how to gain access to the code, and how to decompile/read code.

Just remember to balance the time/complexity of your hash generation technique with your security requirements. You could easily spend a lot of time making a super solid mechanism for salt/hash generation, but is that really worth it when all you are protecting is a user's list of favourite movies?

Filed Under: Security

Attacking Hashed Passwords

Apr 8

The best way to defend yourself is to know how to attack yourself, so here are some of the scenarios a password system may be attacked with:

Brute forcing the application

Here an attacker is attacking the website/winform by throwing a dictionary/random strings at it. In this case the application still loads the salt from the database and does all the work to verify the password. So salting is transparent to the attacker and has zero influence on this type of attack. The best mitigation techniques are account lockout policies (3 failed logins and the account gets locked), forcing strong passwords, and password change policies.

Stolen Database + Brute Force

Here an attacker has stolen the database and is trying to hack the hashes directly. In this case the account lockout policy will not factor in, as that is controlled by the application which we are no longer going through. Strong passwords help here as they are harder to brute force. Password change policies can also really kick in: if it takes 45 days to crack the targeted hash, then by the time I have cracked it the user has changed their password on the live database. The biggest advantage here is salting. Now instead of having to crack a string of 'MyPass' the string is actually 'MyPass#$^q∩gwjoεai←cyuw3b5asdφ♂' as it includes the salt.

This attack actually breaks down into two potential techniques. The first is to generate random strings and check them against every hash in the database until we get a match. When we do get a match it will be something like 'MyPass#$^q∩gwjoεai←cyuw3b5asdφ♂', so we can easily tell which part is the password and which part is the salt. We then go to the production system, enter 'MyPass', the system performs the login steps, and we are logged in.

The second technique is usually used when going after a specific hash instead of all of the hashes. In this attack we take our dictionary/random string plus the salt we already know from the database and run it through the hashing algorithm. This is usually a lot faster as people use weak passwords and we already know the salt portion of the string. So instead of having to guess 'MyPass#$^q∩gwjoεai←cyuw3b5asdφ♂' as we did with the previous method, we already know that part of the string will be '#$^q∩gwjoεai←cyuw3b5asdφ♂', so we just brute force 'random' + '#$^q∩gwjoεai←cyuw3b5asdφ♂'.

So by salting we have done two very important things. We have made the strings to crack very long and added random, non-alphanumeric characters, making the hash way harder to crack, so instead of taking minutes/hours/days to crack a simple password it now takes weeks/months. We have also made it incredibly hard to generate one hash and check it against all passwords. I will illustrate that with an example:

Table Without Salting:

Uid Hash
1 AS24673SAGA
2 JASdf890246
3 5734ASDFga89
4 #&&asdgu3
5 %$#sadfFH%

Because there are no salts in here the hashes are all just the result of hashalgorithm.CreateHash('plaintextword'). So if we want to brute force all we do is hashalgorithm.CreateHash('randomword') and then take the output of that and check it against ALL rows of the database to see if we get a match. So by generating one hash we can check 5 hashes to see if we get a match.

Table With Salting:

Uid Hash Salt
1 SGJGHD6w34* @$ASDG456
2 GSJS66r657 ASDY^#$QM
3 ^*DFHhfsdfh ADtjws&
4 JKFD^4uwsry AH#$#$^&Y#
5 &$DN#TMEB ^$*#GJ%

Here all the hashes were created with hashalgorithm.CreateHash('plaintextword' + 'salt'). If we want to brute force now, we either have to do hashalgorithm.CreateHash('randomword' + 'randomGuessAtSalt') and check that against the whole database (i.e. totally ignoring the salt column), or we do hashalgorithm.CreateHash('randomWord' + 'knownSaltFromDatabase'), but then we can only check against one row at a time instead of all of them.
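To make the difference concrete, here is a small sketch of my own (it uses a SHA-256 stand-in for hashalgorithm.CreateHash and the salts from the table above): against the unsalted table one candidate hash is compared to every row, while against the salted table the candidate has to be re-hashed once per row.

Imports System
Imports System.Security.Cryptography
Imports System.Text

Module BruteForceSketch
    ' Stand-in for hashalgorithm.CreateHash; the real algorithm is not specified in the post.
    Function CreateHash(ByVal input As String) As String
        Using sha As SHA256 = SHA256.Create()
            Return Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(input)))
        End Using
    End Function

    Sub Main()
        Dim guess As String = "MyPass"

        ' Unsalted table: hash the guess once, then compare it against every stored hash.
        Dim unsaltedHashes() As String = {CreateHash("MyPass"), CreateHash("Other1"), CreateHash("Other2")}
        Dim candidate As String = CreateHash(guess)
        For i As Integer = 0 To unsaltedHashes.Length - 1
            If unsaltedHashes(i) = candidate Then Console.WriteLine("Unsalted match at row " & i)
        Next

        ' Salted table: the guess must be re-hashed for every row, using that row's salt.
        Dim salts() As String = {"@$ASDG456", "ASDY^#$QM", "ADtjws&"}
        Dim saltedHashes() As String = {CreateHash("MyPass" & salts(0)), CreateHash("Other1" & salts(1)), CreateHash("Other2" & salts(2))}
        For i As Integer = 0 To saltedHashes.Length - 1
            If saltedHashes(i) = CreateHash(guess & salts(i)) Then Console.WriteLine("Salted match at row " & i)
        Next
    End Sub
End Module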

Really Bad Math

The last time I did math on permutations and calculations was about 10 years ago. Please email me if you find any inconsistencies in my math and I will correct it.

We will say for our sample that we are using an alphanumeric password (so we have 62 characters to choose from) and we will say a password is 6-10 characters long.

The maximum number of computations to get a 6 character hash then is: 62^6 = 56,800,235,584 computations
The maximum number of computations to get a 10 character hash then is: 62^10 = 839,299,365,868,340,224 computations

For a table without salting we get the bonus of calculating one hash and checking it against all the records in our table so I came up with this formula:

(Computations * TimePerComputation) / Records in database = Time Required to crack the first hash

Assuming it takes 0.0000306 seconds to generate one check and we are dealing with a six character password:

Time to crack ONE un-salted password

# of Records    Time Required (seconds)    Time Required (hours)
1               17,380,807                 4,828.00
10              1,738,081                  482.80
100             173,808                    48.28
1000            17,381                     4.8

Time to crack ONE salted password (without factoring in the added string length/complexity of the salt itself)

# of Records    Time Required (seconds)
1               17,380,807
10              17,380,807
100             17,380,807
1000            17,380,807

So now if we add a 128-bit salt, which adds 3.4028236692093846346337460743177e+38 permutations of complexity (I derived this via 2^128, i.e. a bit can be 0 or 1, repeated 128 times):

64^6 + 3.4028236692093846346337460743177e+38  = 3.4028236692093846346337460750049e+38 calculations

3.4028236692093846346337460750049e+38 *  0.0000306 seconds/calculation = 1.0412640427780716981979262989515e+34 seconds

1.0412640427780716981979262989515e+34 seconds = 3.29963712 × 10^26 years

The Rainbow Table

This is a relatively new and quite ingenious attack, and as hard drive sizes and processor speeds have increased it has become quite practical. The basic idea is this:

1. Generate all the permutations of passwords and hash them.

2. Store the hashes in a file along with the word used to create them.

3. When cracking a hash just search the database and out comes the word used to create that hash

So now we have a lot of initial time invested to create the table and very little time needed to look up a hash. I have used this technique to crack my own passwords (which met the Windows complexity rules too) in 10 minutes. Password change policies would not have helped me there.
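A drastically simplified version of the idea is just a precomputed dictionary of hash-to-word (a true rainbow table uses hash chains and reduction functions to trade space for time, which I am glossing over here). This fragment reuses the CreateHash stand-in from the sketch above:

' Reuses the CreateHash stand-in from the earlier sketch.
Dim table As New Dictionary(Of String, String)()
Dim candidates() As String = {"password", "letmein", "MyPass", "qwerty"}

' 1 + 2: hash every candidate word and store the hash alongside the word that produced it.
For Each word As String In candidates
    table(CreateHash(word)) = word
Next

' 3: cracking a stolen hash is now just a lookup.
Dim stolenHash As String = CreateHash("MyPass")
If table.ContainsKey(stolenHash) Then
    Console.WriteLine("Hash cracked, password is: " & table(stolenHash))
End If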

But don't think the sky is falling and passwords are dead (well they are, but that is a whole other series of posts). Having a long password (or passphrase) makes this attack harder to perform, as now you need a table computed for passwords of 6-100 characters instead of 6-10 characters. This would result in a table of well over 100GB and would be even slower to search. Salting also adds to the length and complexity of a password (because a good salt will be long and have non-alphanumeric characters in it), which makes for a huge and slow rainbow table.

Conclusion

These methods are used by many operating systems and software applications to store passwords, and I would not recommend storing them any other way than as a hash. Hashing on its own helps, and adding a salt slows down every form of attack. The big downside is that we have to store the salts: if an attacker got both the hash and the salt it could reduce the time needed to break the hash (yes, I will have a post about securing hashes coming soon). The advantages of hashed passwords far outweigh the disadvantages in my mind.

Filed Under: Security

MVP in Security

Apr 1

Although the timing is funny, this is not an April Fools joke. I was awarded an MVP award for Security today.

Another local, Tom, also got an MVP for C#.

Filed Under: General

I Am Spartacus

Apr 1

Okay, I have to come clean. I have to admit to this as it's been twinkling in my feeble brain for some time now. Now with the shutdown of the CLI_DEV (formerly the altnetconf mailing list) I may as well spill it.

I am not Steve Ballmer. I am not Richard M. Stallman. I am not Scott Bellware.

I am altnet pursefight.

There. I said it. It’s out. Let the chips fall where they may.

How did this sad tale of hidden identity come to pass?

The heathens of the original altnetconf mailing list were just bickering and acting too much like children. So I opted to do the only thing that made sense: be the red-headed step child and tell the world like it was. Unbiased, unadulterated, and unaltered. From the trenches with bullets of bullshit firing left, right, and centre. Natterings of nonsense all around me. With me in the middle, sifting through the mess and bringing the nuggets of crap up to the top of the cesspool.

First I thought I would register altnetpursefight.com and post from there, but then that wouldn’t work as I’m a cheapskate and wasn’t willing to foot the privacy bill to hide my name. Next I thought about setting up a blog somewhere using SharePoint (the CKS:EBE was just released and would have done the job) but that was too much effort for a venture that may have blown up in my face.

So I turned to Blogspot.

There I was able to post anonymously and distill the word amongst the cretins known as the ALT.NET readers.

I also set up a Twitter account to respond to the backlash, which I knew would come post haste. I even posted a message on my own blog trying to throw off would-be hunters of the elusive identity of the one with the knowledge, but I'm not sure how successful that was.

Then I sat back and waited.

And waited.

And waited.

And finally… the first post! Which got my own little thread in the mailing list wondering who I was. And other threads followed suit but there was much fodder for my posts. There were thoughtful posts of course, but then there was the crap. The banana infested baby Vista like poo-poo that really made my blogspot site shine. The heresy, the name calling, the knock-down-drag-out cat fights that ensued.

My favorites were of course my own name calling (like comparing Scott Bellware to a raving shopping cart bag lady) or the "Law of Two Feet", a farce made up by people who don't want to engage you on an intellectual level, so they make up passive-aggressive, bullshit granola ideas like the Law of Two Feet to shield themselves from an actual, fundamental critique of their ideas (whew, that was a real mouthful).

Anywho, there it is. You now know the obvious truth that has plagued the ALT.NET community for months now. The secret is out and I am it. Next up, the identity of Mini-Microsoft!

Filed Under: Uncategorized