Application Security Testing: An Integral Part of DevOps
Writing software is no more a disciplined engineering science than is divination by Ouija board. It's not that software developers are not smart, and it's not that no part of software development is scientific or grounded in engineering principles; it's just that most of what we do is based on subjective opinion. Unfortunately, most of those opinions are wrong!
Ivar Jacobson explained part of the problem at TechEd 2007 in Orlando: "Everyone buys books, but nobody reads them." All of this non-reading that goes on means that even as some people in our industry mature intellectually and scientifically, those ideas are not shared within the industry as a whole. Sometimes, it's so bad that when I refer to a supposedly recognized industry expert, people literally scoff. Why? Because it's not the industry expert that they recognize. The subjectivity is elevated to my expert against your expert. (This is the geek equivalent of my dad can beat up your dad.)
One Size Process Does Not Fit All
Does anyone still believe that RUP, or Agile, or XP all the time is the right way to go? I hope not. There seem to be very few good practitioners of any of these processes, and RUP is so complex that I have never seen anyone do it well.
The truth is that successful software projects often come down to heroic effort from one or more people who just know how to craft working software from bits of code. If you truly are one of these people, demand a raise, often. You are underpaid. If one of these people works for you, expect to pay them more. People who can build software are worth their weight in gold. Many people are muddling through, and it shows in some of the software that is foisted on us.
All of the things that make up a process are individually just tools. Any one of us should be free to pick and choose any process for any project. Period. Spending time writing process documents, writing how-to guides, arguing about coding standards, dogmatically adhering to ten-minute stand-up meetings, or pair programming all the time is hooey.
Ultimately, it will come down to these things: 1) Is there enough time and money to finish? and 2) Do the people working on the project actually know how to complete the software? Everything else is window dressing. This does not mean the process you use is hooey. It means that if your team has a process and it works, then you know how to build software. Ask for a raise.
True Waterfalls Never Work for Software Planning
The basic idea of a waterfall schedule is that we schedule planning, analysis, design, coding, testing, and delivery of software for the most part in consecutive chunks. Once one part is done, we never return to it. Waterfall schedules are the worst model for software planning in the world. Here is why.
Building software is one of those things that is different every time. We are building a brand new something, often with a brand new team, and generally with brand new technologies and tools, and we only have one shot to get it right. This means the customer doesn't know what he wants until he sees what you have, and then he knows that that is not it.
During planning, we are overly optimistic about how long things take. I have never seen a project manager who was overly pessimistic about a schedule. During analysis, everyone misses things. Figuring out all of the scenarios up front is really hard, and sometimes people are wrong. It follows that the design is wrong too, because the requirements are wrong. Then there is the flawed design itself: when people design, sometimes they design crap. Writing code is very labor-intensive, many people are neophytes, and code reflects how people think. Many people's thinking is convoluted, and consequently so is their code. Lastly, testing is often done poorly, if at all, and deployment isn't planned.
The worst part of all of this is that many projects have no real planning, analysis, design, or testing, and deployment is not planned. It comes down to one or two guys pulling all-nighters. (On huge projects it comes down to a handful of guys pulling all-nighters. Sorry, gals, I don't see that many of you.)
The plan is smoke and mirrors, and the waterfall plan is more so. What we need to do is gather detailed requirements and design and build from there. However, we also need a feedback loop where we can go back to the customer/user/expert and test out our theories on analysis and design. We must revisit things often to get them right, and that's why waterfall is wrong. This is not a license to hack. Scope must be carefully planned and agreed upon; if the budget is significant in your organization, the agreement must be in writing. By stating that strict adherence to a waterfall schedule is wrong, we are acknowledging that we need to give people time to clarify and refine their common understanding.
Waterfall looks great to bosses. The schedule reads: done with planning on x-date, analysis on y-date, design, coding, testing, and delivery on such and such a date. I empathize with this rational desire to have an end date. Repeating parts sounds like there is no end in sight. What I am talking about is understanding and flexibility, not an infinitely long schedule of "we'll be done when we are done." The truth in that logic is self-evident, for when else would we possibly be done, except when we are done?
Schedules need a timeline, and projects have to end. Defining scope is important. Getting detailed requirements is crucial, and sometimes acknowledging that things have to change is okay too. With a clearly articulated scope and clear requirements, half the battle is won. The rest comes down to this: do the people implementing it know how to deliver software?
Building the Easy Stuff First is a Huge Mistake
Everybody loves a lay-up. A lay-up in basketball is easier than shooting from distance and not quite as hard as dunking the ball. That's why so many projects start with the easy stuff, the lay-ups. GUIs, menus, about boxes, clever things in web pages, stuff we already know how to write. These are lay-ups. And, doing this stuff first is a huge mistake.
The problems are never found in the easy parts of analysis, design, or implementation. The problems are in the high-value, high-complexity elements: the ten-thousand-pound elephant that everybody knows is in the room but no one acknowledges. What should be tackled first are the things that everyone is worried about, the things that make the system truly valuable.
Tackling the hard stuff first (am I mixing my metaphors now?) helps work out the details of the implementation in all its dark, scary crevices. Doing the easy stuff first sometimes leads the programming into dead ends, and then the hard stuff breaks the design. If enough easy stuff is done first, sometimes a complete rewrite is necessary just to get the hard stuff in at all.
To figure out what to analyze and build first, ask these questions: Is it valuable to the client? Is it technically complex? Does the customer even want it? If you get this part right, you'll be home free.
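Those three questions can be sketched as a simple ranking heuristic. This is only an illustrative sketch, not anything the text prescribes: the feature names, scores, and weighting are invented, and in practice the "scores" would come from conversations with the customer, not from a table.

```python
# Hypothetical sketch: ordering candidate features so the risky,
# valuable work is tackled first. All names and scores are invented.

def priority(feature):
    """Score a feature; higher means build it sooner."""
    # A feature nobody wants scores zero regardless of the rest.
    if not feature["customer_wants_it"]:
        return 0
    # High value AND high technical complexity both push work earlier:
    # that combination is where projects sink or swim.
    return feature["business_value"] + feature["technical_complexity"]

features = [
    {"name": "about box",      "business_value": 1, "technical_complexity": 1, "customer_wants_it": True},
    {"name": "pricing engine", "business_value": 9, "technical_complexity": 8, "customer_wants_it": True},
    {"name": "splash screen",  "business_value": 2, "technical_complexity": 1, "customer_wants_it": False},
]

build_order = sorted(features, key=priority, reverse=True)
print([f["name"] for f in build_order])
# → ['pricing engine', 'about box', 'splash screen']
```

The point of the sketch is the ordering, not the arithmetic: the hard, valuable "pricing engine" comes first, the lay-ups come later, and the feature nobody asked for falls to the bottom.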
GUIs Do Not Constitute Hard Stuff! Usually!
The GUI thing is a problem. GUIs are what non-nerds see. It's what they need to see. The problem is that tools such as Visual Studio make GUI building easy and when non-nerds see GUIs they begin to think we are done, but in reality the hard stuff is lurking around the corner. It is hard to manage the customer's need to see progress and yours to make real progress, but if software were easy, boneheads would be doing it. (And, typically boneheads are not writing software.)
OOP and UML Don't Solve All of Our Problems
Objects and UML are responsible for screwing up a lot of projects. Object-oriented anything is not a guarantee of anything. We have to know what objects to create and how many, and that is hard.
OO and UML are a bit like predicting the weather: very, very hard. But if we don't try, a hurricane, tornado, flood, or the Santa Ana winds might kill a lot of people.