Today's recommended English article: "Why Defensive Programming is the Best Way for Robust Coding" by Ravi Shankar Rajan
Why Defensive Programming is the Best Way for Robust Coding

Defensive programming is when a programmer anticipates problems and writes code to deal with them.
It is like foreseeing a car crash and remaining calm, because you have insured against it.
That said, the whole point of defensive programming is guarding against errors you don't expect. A defensive programmer is on the watch for trouble, aiming to avoid it before it can cause real problems. The idea is not to write code that never fails; that is a utopian dream. The idea is to make the code fail beautifully when an unexpected issue occurs. Failing beautifully can mean any one of the following.
· Fail early: your code should detect problems and stop important operations in advance, especially those that are computationally expensive or might irreversibly affect data.
· Fail safe: on failure, your code should relinquish all locks, acquire no new ones, write no files, and so on.
· Fail clearly: when something is broken, the code should return a clear error message and description that enables the support team to resolve the error.
Ok. You might argue here.
There are no problems in the present. My code is working beautifully. Why should I invest time and effort in a "future, anticipated" problem? After all, we have been repeatedly taught "You Ain't Gonna Need It" (YAGNI). And you are a professional programmer, not a hobbyist who can keep adding to the code at will.
The key here is pragmatism.
Andrew Hunt and David Thomas, in their book The Pragmatic Programmer, describe defensive programming as "Pragmatic Paranoia".
Protect your code from other people’s mistakes and your own mistakes. If in doubt, validate. Check for data consistency and integrity. You can’t test for every error, so use assertions and exception handlers for things that “can’t happen”.
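In Python, that advice might look like the following sketch (a hypothetical `average` function, not from the original article): validate input you don't control with an exception, and use an assertion for an invariant that "can't happen":

```python
def average(values):
    # Defensive check: validate data coming from callers we don't control.
    if not values:
        raise ValueError("average() requires a non-empty sequence")
    result = sum(values) / len(values)
    # Assertion for a thing that "can't happen": the mean must lie
    # between the smallest and largest value in the data.
    assert min(values) <= result <= max(values), "mean outside data range"
    return result
```

The ValueError guards against other people's mistakes; the assert guards against our own.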
Healthy Paranoid Programming is the right kind of programming. But paranoia can be taken too far. The key is striking the right balance.
And here are some ways to do defensive programming.
Ask Yourself: What If This Fails?

Every line of code does something, so the first line of defense is asking yourself: if this code fails, then what?
For example, consider the following non-compliant code.

CASE sy-index.
  WHEN 1.
    " handle the first case
  WHEN 2.
    " handle the second case
ENDCASE. " Noncompliant; missing WHEN OTHERS clause

Here we can ask the following questions.
- What happens if sy-index is not 1?
- What happens if sy-index is not 2?
Simple, isn't it? A WHEN OTHERS clause answers both questions.

CASE sy-index.
  WHEN 1.
    " handle the first case
  WHEN 2.
    " handle the second case
  WHEN OTHERS. " Compliant
    WRITE 'Unexpected result'.
ENDCASE.
It is this "what if" thinking that separates good programmers from those who write code and hope it never fails. "Never" always comes sooner than expected, and by then the code is buried in a long-forgotten part of the program, with error messages giving no indication of where the problem is or how to resolve it.
The beauty of this defensive programming technique is that it costs almost no time to add exhaustive checking to your code. You are not "over-coding"; you are just securing your code.
Check the Boundary Conditions Carefully

The very first check is to ascertain whether you need the loop at all; after all, loops are expensive.
Boundary (or edge) conditions are where all the action happens. In a loop from 0 to 100, iterations 1 through 98 are pretty much the same (barring conditionals in the code, of course). But iteration 0 is where the code enters the loop and initialization conditions are set up (and possibly set up wrong). Likewise, the last iteration is where the loop exits, and whatever the loop was doing to values stops.
A loop with at most one iteration is equivalent to the use of an IF statement to conditionally execute one piece of code. No developer should expect to find such usage of a loop statement. If the initial intention of the author was really to conditionally execute one piece of code, an IF statement should be used in place.
Consider the following non-compliant and compliant code. If the intention really was to execute the body once, a simple IF will do; here the compliant version removes the misleading EXIT so the loop actually iterates.

Noncompliant Code Example

DATA remainder TYPE i.
DO 20 TIMES.
  remainder = sy-index MOD 2.
  EXIT. " Noncompliant; the loop only executes once. We can use IF.
ENDDO.

Compliant Code Example

DATA remainder TYPE i.
DO 20 TIMES.
  remainder = sy-index MOD 2.
ENDDO.

Always remember that debugging loops involves most of the effort at the start and the end, making sure what goes in and what comes out is correct. Once you are clear about the boundary conditions, nothing else can really go wrong with your code.
Use TDD (Test-Driven Development)

The fundamental idea of TDD is "first write unit tests, then write the code, then refactor, then repeat."
Unit tests are automated tests that check whether functions work as expected. Your very first unit test should fail since it’s written before you even have any codebase.
You add a bit to the test case code. You add a bit to the production code. The two code streams grow simultaneously into complementary components. The tests fit the production code like an antibody fits an antigen.
The problem with testing code is that you have to isolate that code. It is often difficult to test a function if that function calls other functions. To write that test you’ve got to figure out some way to decouple the function from all the others. In other words, the need to test first forces you to think about good design.
This creates a better, decoupled design in which you have better control over things as the code develops.
Writing test cases upfront might consume time initially, but it brings a lot of benefits. Developers admit that previously they used to write lines of code, realize that their solutions were irrelevant, and then start coding again from scratch.
Unlike outdated coding practices, TDD allows developers to go back to the drawing board and concentrate on designing a lightweight, flexible architecture upfront.
And the very act of writing test cases upfront catches early many bugs that would otherwise pop up later, saving time, effort and heartburn.
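As a minimal sketch of the test-first rhythm (a hypothetical fizzbuzz function, not from the original article): the tests below are notionally written first and fail, and the production code is grown until they pass:

```python
import unittest

def fizzbuzz(n):
    # Production code grown step by step to satisfy the tests below.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class TestFizzBuzz(unittest.TestCase):
    # Each test was added before the branch of code that makes it pass.
    def test_plain_number(self):
        self.assertEqual(fizzbuzz(2), "2")

    def test_multiple_of_three(self):
        self.assertEqual(fizzbuzz(9), "Fizz")

    def test_multiple_of_five(self):
        self.assertEqual(fizzbuzz(10), "Buzz")

    def test_multiple_of_both(self):
        self.assertEqual(fizzbuzz(30), "FizzBuzz")
```

A suite like this can be run with `python -m unittest`, and the red-green-refactor loop repeats for each new requirement.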
Always Write Optimized Code

Some programs (and programmers) like resources a lot. But whenever you can, use the minimum, and to use the minimum your code should be as optimized as possible.
Usually, one surefire way to optimize is to turn on whatever optimizations the compiler provides built in.

Compiler optimizations usually improve runtime anywhere from a few percent to a factor of 2. Sometimes they may also slow the product down, so measure carefully before taking the final call. Modern compilers, however, do sufficiently well in this regard that they obviate much of the need for small-scale changes by programmers.
Besides the standard compiler optimizations, there are several other tuning techniques that can be used.
Collect common subexpressions

If an expensive computation occurs in multiple places, it is better to compute it in one place and remember the result. Don't put such computations within a loop unless required.
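A Python sketch of this hoisting (function and variable names are hypothetical): the scaling factor is computed once outside the loop instead of on every iteration:

```python
import math

def distances_from_origin(points, scale):
    # math.sqrt(scale) is a common subexpression: its value does not
    # change inside the loop, so compute it once and remember it.
    factor = math.sqrt(scale)
    return [factor * math.hypot(x, y) for x, y in points]
```

Had `math.sqrt(scale)` appeared inside the comprehension, it would be re-evaluated for every point for no benefit.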
Replace expensive operations with cheap ones

String manipulation is probably one of the most common operations in any program. However, it can be an expensive operation if done incorrectly. Similarly, in some cases you can improve performance by replacing multiplication with a series of shift operations. Even where this is effective (and it isn't always), it produces very confusing code, so weigh the readability of the code as well before taking the decision.
Eliminate loops

Loops are mostly overhead. Try to avoid loops wherever possible, especially when the number of iterations is small.
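When a closed-form result exists, the loop can disappear entirely; a sketch with the classic Gauss sum (the looped alternative is shown in the comment):

```python
def sum_first_n(n):
    # Loop version: total = 0; for i in range(1, n + 1): total += i
    # The closed form n * (n + 1) / 2 eliminates all n iterations.
    return n * (n + 1) // 2
```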
Cache frequently used values

Caching takes advantage of locality: the tendency of programs and people to reuse recently used data. Caching just the most frequently used values can significantly improve the performance of a program.
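In Python, functools.lru_cache is one standard way to do this; a sketch with the classic recursive Fibonacci, where caching turns exponential recomputation into linear work:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Without the cache, fib(n - 2) subtrees are recomputed exponentially
    # many times; with it, each value of n is computed exactly once.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```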
Rewrite in a lower-level language

This should be the last resort. Lower-level languages tend to be more efficient, although more time-consuming from the programmer's point of view. Occasionally we get significant improvements by rewriting crucial code in a lower-level language, but this comes at the cost of reduced portability, and maintenance becomes very hard. So take the decision carefully.
Remember, in optimization, selecting what to optimize is perhaps 90% of the game. It's worth taking the time to decide what you're doing and to do it right. Of course, that's also where the black magic lies!
And Lastly, Trust No One

"There are known knowns; there are things we know we know," Donald Rumsfeld, the Secretary of Defense during the second Bush administration, once said at a press conference. "We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns, the ones we don't know we don't know."
Rumsfeld was talking about the war in Iraq, but the same holds true for data also. In a nutshell, this means to verify all data that you do not have complete control over.
Obviously, user data is always suspect. Users can very well misunderstand what you think is crystal clear. Try and anticipate issues, and verify or otherwise tidy up everything that comes in.
Program settings data is also prone to error. INI files used to be a common way of saving program settings. Because they were a text file, many people got in the habit of editing them manually with a text editor, and possibly (likely) screwing up the values. Registry data, database files — someone can and will tweak them someday, so it pays to verify even those things.
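A sketch of verifying settings data in Python (the key name, default, and dict source are hypothetical): parse, type-check, and range-check before trusting a value someone may have hand-edited:

```python
def load_port(settings):
    # `settings` stands in for a dict parsed from e.g. an INI file,
    # where every value arrives as a string a human may have mangled.
    raw = settings.get("port", "8080")  # fall back to a sensible default
    try:
        port = int(raw)
    except ValueError:
        raise ValueError(f"port must be an integer, got {raw!r}") from None
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port
```

A bad value is rejected at load time with a clear message, instead of surfacing later as a mysterious network error.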
In short, the data coming in must be clean if you have any hope of your code doing what it is meant to do. If you've ever heard the phrase "Garbage In, Garbage Out", this is where it comes from.
As W. Edwards Deming rightly said:
“In God we trust. All others must bring data.”