Don't Tell Me Someone Failed Web Programming 101 Again
In this article, a financial services company responded to someone pointing out an obvious flaw in their URL scheme by calling the police and then threatening to bill him for the fix.
What was the problem? Apparently they put raw database IDs into the URL to identify the customer, so you could just change the number and see other people's account info. Basically the dumbest web programming error you can make. Instead of thanking him they blamed him for the problem, as if his discovery of the bug caused the bug to exist. I'm not sure what to call this, maybe a Reverse Heisenbug? Instead of the bug changing when you look at it, the bug appears because you looked at it.
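To make it concrete, here's a minimal sketch of the pattern, with hypothetical names since I obviously haven't seen their code: a handler that trusts whatever ID shows up in the URL, next to the one-line ownership check that should have been there.

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch of the hole: the account number in the URL is trusted
// as-is, so changing ?account=1001 to ?account=1002 in the address bar shows
// someone else's data. The fix is a one-line ownership check.
public class AccountServlet extends HttpServlet {

    // Hypothetical collaborators so the sketch compiles on its own.
    interface Account { String summary(); }
    interface AccountDao { Account findByIdAndOwner(long id, long ownerId); }
    private AccountDao accountDao; // wired up elsewhere in a real app

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        long accountId = Long.parseLong(req.getParameter("account"));
        long currentUserId = (Long) req.getSession().getAttribute("userId");

        // Broken version: accountDao.findById(accountId), no ownership check at all.
        // Fixed version: only return the row if it belongs to the logged-in user.
        Account account = accountDao.findByIdAndOwner(accountId, currentUserId);
        if (account == null) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        resp.getWriter().println(account.summary());
    }
}
```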
I can fully understand how such major stupidity makes it into a sensitive web application: the mentality of management is usually the main culprit, along with having people unfamiliar with web programming build the system. Both are sadly common in financial services (and healthcare) companies, at least in my experience with both.
I once worked a short contract at a local state university (not exactly a paragon of quality coding either). The manager of the group had formerly been a cashier who lucked into the management position (not uncommon where there's a lot of turnover) and was highly protective of her developers and their supposed superiority. The first thing I noticed was that the single sign-on functionality passed the username and password in the URL to the next application (thus allowing you to capture anyone's password with the back arrow), but I didn't say anything at first, since being the outsider already made the manager suspicious of anything I said.
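For anyone who hasn't seen that particular anti-pattern, the hand-off looked roughly like the sketch below. I'm inventing the names and the URL; the point is just how little it takes to leave a password sitting in browser history, access logs, and Referer headers.

```java
import java.io.IOException;
import java.net.URLEncoder;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical reconstruction of the hand-off: the first app redirects to the
// next one with the credentials pasted into the query string.
public class HandoffServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String user = req.getParameter("user");
        String password = req.getParameter("password");

        // The problem in one line: credentials become part of a URL, which then
        // sits in browser history (one back-arrow away), access logs, and Referer headers.
        String target = "https://apps.example.edu/nextapp?user="
                + URLEncoder.encode(user, "UTF-8")
                + "&password=" + URLEncoder.encode(password, "UTF-8");
        resp.sendRedirect(target);

        // The usual fix is to pass a short-lived, single-use token instead of
        // the credentials and let the next app exchange it server-to-server.
    }
}
```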
As I was finishing the contract I had a little time left (the contract work was an iCal clone written as a web app), so I looked at the group's major product. This was an app that each department in the university had to use to verify and document what they spent their state funds on. If that information wasn't available, the state would withhold all the money (basically most of the school's budget). The URLs seemed odd to me, so I looked at the source. It was basically the same bug as above: database IDs were used verbatim in the URLs, and on top of that every operation was a GET, including delete.
So I showed them how easy it was to sit there with a browser, change IDs in the URLs, and slowly but surely delete the entire database! With a little cleverness I could have built a command-line client to do it for me. This finally got the manager's attention (the iCal app was proving popular), and I spent the last day fixing the app to use hashed IDs, roughly as sketched below (I expect they changed the delete functionality later).
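This is reconstructed from memory and the names are mine, but the idea is simply to put a keyed hash in the URL instead of the primary key, so you can't enumerate rows by counting.

```java
import java.util.HashMap;
import java.util.Map;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch of the "hashed IDs" idea: URLs carry a keyed hash of the database ID
// instead of the ID itself, so you can't walk the table by incrementing a number.
public class OpaqueIds {
    private final SecretKeySpec key;
    private final Map<String, Long> tokenToId = new HashMap<>();

    public OpaqueIds(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    // Called when rendering a link: remember the mapping and emit the token.
    public String tokenFor(long databaseId) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        byte[] digest = mac.doFinal(Long.toString(databaseId).getBytes("UTF-8"));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        String token = hex.toString();
        tokenToId.put(token, databaseId);
        return token;
    }

    // Called when a request comes in: unknown or tampered tokens simply miss.
    public Long idFor(String token) {
        return tokenToId.get(token);
    }
}
```

Even that only closes the enumeration hole; a delete should still never be a GET, since anything that blindly follows links can trigger it.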
At the financial services company I've written a lot of posts about, a slightly different and more interesting bug appeared when the "new" customer account application was rolled out. After a few hours of being live, reports started coming in of people seeing other people's account data (investments, balances, etc.) mixed into their own. The application had worked fine on the QA servers, so this was a shock to management, which immediately killed the system, but not before maybe 1,000 people had seen it.
We used Java (no, that's not the bad thing!) and WebLogic as the application server, with a DB2 backend, served through Microsoft's IIS web server (yes, that is a strange combination).
Apparently some consultants had written part of the system months earlier but were long gone at this point; they had decided to store state in Stateless Session EJBs to cache data for the currently logged-in user. This is of course a complete oxymoron: stateless session beans are pooled and shared among all sessions, so you can never count on getting the same one back. The reason it appeared to work is that the reference to the bean was saved in the user's session and (at the time, anyway) was a simple value; retrieving that value on a subsequent request would return the same bean, and under development testing conditions with a large pool of beans it seemed fine. The company was too cheap to mirror production in QA: production was a cluster, QA was a single app server.
So once the application went live, a saved bean reference could point to a reused bean on the same server, or resolve to the same reference number on the other server in the cluster, so there were two ways to get mixed up. Depending on the load you might even trip this multiple times and "collect" data from several customers, each of which would then be stored in the database. It took months to untangle the mess.
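If you've never hit this particular foot-gun, the anti-pattern looked roughly like the sketch below. I'm reconstructing it from memory in modern EJB 3 style for brevity (the original was EJB 2 on WebLogic), the names are invented, and the original code went a step further by stashing the bean reference in the HTTP session, but the core mistake is the same: per-user state parked in a pooled stateless bean.

```java
// --- CustomerData.java: minimal placeholder for the cached value ---
public class CustomerData {
    public final long customerId;
    public CustomerData(long id) { this.customerId = id; }
}

// --- CustomerCache.java: a *stateless* session bean misused as a per-user cache ---
import javax.ejb.Stateless;

@Stateless
public class CustomerCache {
    // Instance state in a stateless bean. Instances are pooled and the
    // container may hand any of them to any caller, so this field is
    // effectively shared between customers.
    private CustomerData current;

    public void load(long customerId) { current = lookupInDb(customerId); }
    public CustomerData get() { return current; }

    private CustomerData lookupInDb(long id) { /* DB2 query elided */ return new CustomerData(id); }
}

// --- PortfolioServlet.java: the calling side ---
import java.io.IOException;
import javax.ejb.EJB;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class PortfolioServlet extends HttpServlet {
    @EJB
    private CustomerCache cache; // a proxy in front of the pooled instances

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        long customerId = (Long) req.getSession().getAttribute("customerId");

        // Two consecutive calls are not guaranteed to reach the same pooled
        // instance, and under load another user's load() can land between your
        // load() and get(). On a single lightly loaded QA server it mostly
        // appears to work; on a production cluster it mixes customers together.
        cache.load(customerId);
        resp.getWriter().println(cache.get().customerId);
    }
}
```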
The Java programming team was blamed for the problem, although the error was caused by contractors who had not been managed and were never required to document anything. The project manager (whose fault it was) got a promotion, I think.
Sure, almost anything would have prevented the problem: proper QA hardware, load testing, not hiring a random consulting firm and trusting everything they did, code reviews, project managers who actually understood programming (if I remember correctly, the PM had previously worked in restaurants), management who understood the importance of proper QA; the list goes on.
Sadly, incidents like these are very common and unlikely to ever end.