Welcome to queirozf.com
If you spend most of your time coding Information Systems and if you develop software within an MVC framework/mindset, the most stable, clean and reliable parts of your application are probably your database tables.
We usually build our applications (that is, add complexity) around these tables. We model our domain classes after the database (very likely the case if you're using a design pattern like Active Record), and we think and reason about our domain over a cup of coffee with our database diagram in our hands.
We can't escape complexity. We can manage it, control it and understand it, but we can't escape it, because we are modeling complex behaviour and processes.
I think that I should make something clear: there are two types of complexity:
- There's inherent complexity in all software systems which we cannot avoid and, without which, our application is basically worthless. Only complex problems are worth solving. Software systems need to be complex in order to justify their existence. You can't avoid this.
- There's accidental complexity too, which is complexity that doesn't add any extra value to your users. Under this category we can put things like bloated code, duplicated behaviour, over-generalized classes, large methods and so on. This is complex code that will make your software harder to change and extend, and won't get you any extra money. These are things which can (and should) be understood and tackled.
Where Complexity Builds up in Information Systems
- Models: It is just too easy to add a method to a model class. This may well be the most serious form of complexity creeping into your system, since it is also the most difficult to change: changes here affect the whole application. If a model becomes too unwieldy to maintain and extend, you're probably looking at a major overhaul ($$ and time) that will likely affect and/or break your controllers too (though probably not your views).
- Controllers: It is not so bad to add many actions (that is, separate routes that each render a separate view) to a controller; this is the way it's supposed to be.
- Helpers and support code: We generally don't pay as much attention to these as to the main code in our application (mainly the models), as they are seen as second-class citizens. Their main objective is just to support the application and keep code that is not specific to our domain out of our domain classes.
- Test code: Needless to say, not many of us take part in projects where the cleanliness and organization of testing code even becomes a problem.
Ways to Mitigate it
- When you first need to do something with a model (for example, in a controller, in order to select some data to display), write that code there (i.e. where it's actually used). If it grows past, say, 20 lines or so, or if it is needed by other parts of your application, extract the code into a helper method. Only if the helper method becomes too large and over-generalized should you move the code into the actual model class.
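The progression above can be sketched in a few lines of Python. This is a hypothetical illustration, not framework code; all the names (`Order`, `recent_orders_view`, `find_open_orders`) are invented:

```python
from dataclasses import dataclass

@dataclass
class Order:
    id: int
    status: str

# Step 1: when first needed, the selection logic lives right where it is used.
def recent_orders_view(orders):
    open_orders = [o for o in orders if o.status == "open"]  # inline is fine while short
    return f"{len(open_orders)} open orders"

# Step 2: once the logic grows, or other parts of the app need it,
# extract it into a helper method.
def find_open_orders(orders):
    return [o for o in orders if o.status == "open"]

# Step 3 (only if the helper becomes large and over-generalized) would be
# to move this logic onto the Order model class itself.

orders = [Order(1, "open"), Order(2, "closed"), Order(3, "open")]
print(len(find_open_orders(orders)))
```

The point of deferring step 3 is that code on the model class is visible to the whole application, so it is the most expensive place to change later.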
Types are very important and permeate most work we do when programming, and also how we reason about our programs and systems. I've tried to list a few trade-offs and comparisons between them to help us choose which one suits our needs the best.
I'm thinking of these definitions as follows:
Static Typing means that every variable has just one type, and methods have strict type signatures.
Dynamic Typing (often associated with duck-typing) means that it doesn't matter so much what type a variable is, as long as it does what you want (the term duck-typing comes from the famous duck test).
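A minimal Python sketch of the duck test: the function never checks types, so any object that responds to `quack()` is acceptable (the classes here are invented for illustration):

```python
class Duck:
    def quack(self):
        return "Quack!"

class Person:
    def quack(self):
        return "I'm quacking!"

def make_it_quack(thing):
    # No isinstance() check: if it quacks like a duck, that's enough.
    return thing.quack()

print(make_it_quack(Duck()))    # Quack!
print(make_it_quack(Person()))  # I'm quacking!
```

In a statically typed language, `make_it_quack` would instead declare an interface (or type constraint) that both classes must explicitly satisfy before the program even compiles.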
- Static typing: More work at the beginning (hardly any program compiles the first time you write it), but errors caught at compile time mean system behaviour will be more predictable at runtime.
- Dynamic typing: Easier to get a system working from the get-go (no compile-time or type errors), but harder to keep complexity at bay and avoid runtime errors as the system grows.
- Static typing: Type and compile-time errors are easily caught, but that doesn't free you from testing functionality, i.e. not how well the implementation fits the requirements, but how well the system fits actual user expectations.
- Dynamic typing: Requires more testing than static typing. The reason is clear: far fewer potential errors are caught at compile time.
Behaviour at runtime
- Static typing: Tends to be more reliable and stable. If you run your program a couple of times and it works, it's likely correct.
- Dynamic typing: Tends to be less reliable and more prone to errors. You may have a system that runs well for weeks and then breaks when some new, unforeseen condition is met: an object whose attributes have not been correctly set (for example, due to faulty user interaction) will cause a runtime error.
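This failure mode is easy to reproduce. In the sketch below (the `User` class and form-handling are invented), the object is constructed without error, and the problem only surfaces later, when the code path that uses the missing attribute finally runs:

```python
class User:
    def __init__(self, form_data):
        # If the form omitted "email", the attribute is silently never set.
        if "email" in form_data:
            self.email = form_data["email"]

u = User({})  # constructed fine: no error here

try:
    domain = u.email.split("@")[1]  # blows up only here, at use time
except AttributeError as e:
    print("runtime error:", e)
```

A static type system would have forced `email` to be declared (and either initialized or explicitly marked optional), moving this error from week three of production to the first compile.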
Software is one of the most complex things humans can set out to do.
Software is complex because it's invisible; it's not tangible.
You can't possibly "show" 500,000 lines of code to someone and tell him/her "see here? This is what my program does" the same way you can (if you were a regular civil engineer, for instance) show a friend a building and say: "This is what my building is like".
There's not much ambiguity in the physical world. If you show someone a door (even a somewhat more advanced door), you can be sure you're both looking at the same thing. There's no risk of you pointing at a straight edge while they "see" a round edge.
Not so with software. The layers of abstraction are so deep that it's much more difficult to share your understanding of a codebase with someone else. Chances are that things will get misunderstood in the process.
But that's not really where testing comes in. This is not where the real complexity lies, in my opinion.
Perfection is expected from software. Perhaps because you can't negotiate with it (unless you dive into the code in order to hack it) on a physical level.
If you live in a building whose door is too small, you can perhaps hire someone who, with a little work, can widen the door with a few hammer strokes or something like that. In other words, the physical world is negotiable with the tools you have.
If you have a piece of software that doesn't do what you want, you're basically screwed. You can't adjust it to your own needs the same way you can a door that's too small.
However, since business objectives are always changing, users are always wanting new features from their software. This isn't bad at all, mind you; it's something we accept when we become software developers.
The natural consequence of this is that software needs to be constantly changed and updated. Not just adding more features, but combining existing features into new ones, removing now-useless features, adding new levels of abstraction and so on. This is what kills bad software in the long run. Complexity becomes unmanageable and nobody knows which way is up anymore.
The fact is that we can't avoid changing software, because we live in a changing world. Software supports every other industry in the world. Every last sector of the economy needs us to make its processes more effective, lower its costs and so on.
So we can't get rid of changing software. So what do we do? We embrace it, and take steps to enhance the process.
The solution is testing. A lot of test cases. Unit tests, functional tests, integration tests, user interface tests, you name it. Yes it's a lot of work, but the alternative is code that does not survive more than a couple of years of changing requirements.
Having an all-around testing framework under our software is the only way we can proceed to change it to adapt to new expectations (remember, we just can't avoid it) and be sure that the system as a whole still works.
Removing unused code is just as important as adding more features if you're to keep your project to a manageable size. Without tests, you'll just never perform the necessary structural changes that newer requirements make unavoidable. You just won't do anything that may cause your existing system to break down. And you'll have no idea what sort of functionality may break when you change your code unless you can run tests to verify that.
Having a large number of test cases that you can run automatically after each change is the only way software can evolve over time without turning into a Frankenstein's monster.
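As a toy illustration of that safety net, here is a unit test written with Python's standard `unittest` module. The function and its values are invented for the example; the point is that the tests pin down current behaviour, so a later structural change to `apply_discount` can be made with confidence:

```python
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not (0 <= percent <= 100):
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

# Run with: python -m unittest this_module
```

If a future requirement forces you to restructure the pricing code, these two tests fail loudly the moment the restructuring breaks the contract, instead of weeks later in production.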
Extracted from a conversation in an Internet forum.
If I didn't agree to [capitalism] in principle, I'd agree to it because of its practical effects: the easiest way to rise up in a free market society is to be productive and produce value for others, who will pay you very well if you can create value for them.
In addition, if you can come up with novel ways to do so more effectively than everyone else, you will make even more money, because other people will be paying less to have access to the particular good/service you provide. Therefore, a system is in place where what's individually best for each one (if one's interested in living a life with access to more services and goods) is to be useful for others, and everyone profits.
This is just beautiful if you ask me, and this is what's been responsible for the whole world today (even the poorest) having better living standards than a noble Englishman in the 17th century. Now that guy was a 1%er if there ever was one! Yet he would have had better (absolute) living standards if he were in the lowest 1% in England today!
The whole world got richer and nobody got poorer. Wealth was created. It didn't get redistributed. It got created. Thanks to the 1% of geniuses and very productive people who make the breakthroughs in science and manufacturing.
The idea that there's a fixed amount of wealth in the world is the most easy-to-fall-for idea in the history of mankind.
We, the 99%ers (yes, I'm nowhere near the top 1%, or even the top 10% actually, in the country where I live), think we are at a loss. But, were it not for the 1%ers of the world, the very richest among us would still be dying in their 30s from diseases nobody had found the cure for, or paying a day's worth of work for a plateful of food, because nobody had figured out the very efficient ways to produce food that made it so cheap.
Mind you, the engineer who figured out those efficient ways to produce food only worked so hard in order to make a larger profit for himself. He would be among those who are filthy rich today and get insulted for being 1%ers.
Thank god (for lack of a better term) for selfishness and individuality because it has enabled people to hack their minds at the deepest possible levels to discover stuff that has made life easier for me, a mere 99%er (what's worse, a 99%er in a third-world country!).
Someone else's selfishness is the reason you and I have a computer in front of us right now and we can afford to write stuff in an internet forum in our leisure time and still are able to afford food, clothes, cars and whatnot.
Now why would you want to do that?
See, design is mostly about using visual elements to inform users and convey information to them. Normally, there's a trade-off between content and form: since I deal mostly with web applications, I tend to tip that balance towards content, but try to do so in an elegant way, so as not to render the overall design ugly or too confusing.
One method I think could be a good way to convey more information in (web application/website) design is by informing the users what they can't do right now. It is a way to bring real-world constraints into your design and give more hints about what can be accomplished with that interface.
For example, suppose you have an application with a sidebar that serves as a menu. In most (if not all) cases you'll need to run some code before rendering the menu, to provide points of flexibility for different user types, or for different actions being available at different times.
For instance, you may have a use case where users can perform action A and action B, but they must do action A before action B, in other words they can't perform action B if they haven't performed action A first.
Say you want to make a menu for an app that lets users carry out action A and action B, and you want to bring that constraint (no B before A) into your design. You might have code like this:
if user has performed action A
    show menu item for action B only
else
    show menu item for action A only
This would work: before the user carries out action A, the menu shows only the item for action A; once action A has been performed, it shows only the item for action B.

Now there's nothing wrong with that. It probably wouldn't deserve the best-design-of-the-year award, but it does the job. I, however, think that you could put some more information on it. Compare it with a menu that always shows both items, but grays out action B until action A has been performed.

With that version you've effectively informed your users that both actions are available to them, but action B is momentarily unavailable. And, since the unavailable option is grayed out, it doesn't make so much 'noise' on the screen. This is an effective technique to use for a variety of reasons:
introducing new functionality;
enforcing real-life constraints in your design (thereby reducing the need for a 'help' section, which nobody reads anyway);
conducting the user gently through a step by step process, in case you need one;
giving some background information about the system and what it can accomplish (if users just saw 'action A', as in the first example, they might think that your app doesn't support 'action B' at all, and could leave right then and there).
P.S.: You could even add some tooltips to the controls, explaining further why they're not available at the time.
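The "show it, but disable it" idea can be sketched as a small server-side rendering function. This is an invented illustration (the item labels, CSS class and tooltip text are all assumptions), producing a plain HTML list where action B is visible but inert until action A is done:

```python
def render_menu(has_performed_a):
    # Each item: (label, enabled?). Action B is shown either way,
    # but only enabled once action A has been performed.
    items = [
        ("Action A", True),
        ("Action B", has_performed_a),
    ]
    parts = []
    for label, enabled in items:
        if enabled:
            parts.append(f'<li><a href="#">{label}</a></li>')
        else:
            # Grayed out (via CSS on the "disabled" class) and not a link,
            # with a tooltip hinting at the constraint.
            parts.append(f'<li class="disabled" title="Do Action A first">{label}</li>')
    return "<ul>" + "".join(parts) + "</ul>"

print(render_menu(False))  # Action B present but disabled
print(render_menu(True))   # both items enabled
```

The tooltip from the P.S. above falls out naturally here as the `title` attribute on the disabled item.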