Welcome to queirozf.com
These are some of the factors that have contributed to the rise of new concepts like the Internet of Things (IoT for short) and Big Data - terms that have since left the realm of academia and entered the mainstream.
IPv4 to IPv6 Transition
We are right in the middle of a large transition from old-fashioned IPv4 to IPv6, but what does it mean for us? In comparison with IPv4, IPv6 supports roughly 10²⁸ times as many endpoints!
This means that it will be possible for every single device (no matter if we're talking about thousands of heat sensors in a forest) to be connected to a network interface - being able to send and receive data from potentially any other Internet-ready device in the world. Any tiny piece of hardware could, theoretically, be uniquely identifiable via an IP.
IPv6 supports roughly 10²⁸ times as many addresses as IPv4
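As a quick sanity check of that figure, here is a minimal back-of-the-envelope calculation (sketched in Python) based only on the address sizes: IPv4 addresses are 32 bits wide and IPv6 addresses are 128 bits wide, so the ratio is 2⁹⁶, which is on the order of 10²⁸.

```python
# Back-of-the-envelope check of the "10^28 times as many addresses" figure.
# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_total = 2 ** 32    # about 4.3 billion addresses
ipv6_total = 2 ** 128   # about 3.4 * 10^38 addresses

ratio = ipv6_total // ipv4_total  # exactly 2^96
print(f"IPv6 has {ratio:.2e} times as many addresses as IPv4")  # ~7.92e+28
```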
Explosion of Devices and Data
In addition to the much larger number of addresses available through IPv6, the cost of hardware has gone down over the last few years, while newer and faster CPUs and hard disks have been developed.
This has contributed to what is being called the commoditization of processing power and storage space.
Last year (2013), there were over 10 billion connected devices, and this number is expected to climb as high as 50 billion by 2020, according to an estimate by networking equipment maker Cisco (source).
The hockey stick effect
GIS, short for Geographical Information Systems, is the umbrella term for systems whose objective is to store large quantities of coordinates and/or extra information related to them.
With the increase in the number of mobile and handheld devices, as well as the aforementioned explosion in the number of overall (including static) devices, it has become ever more convenient to store event locations and/or user actions as defined points in time and space in GISs.
Sensors are becoming economically viable for many industry sectors such as manufacturing, agriculture, energy generation and so on.
Each sensor typically emits data at a predefined rate or when some threshold conditions are met. This means lots of data gets sent to a database and needs to be acted upon, sometimes even in real time.
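As a rough sketch of that pattern (the sensor read, the collector call and the specific interval and threshold values below are all hypothetical, just for illustration), a sensor's reporting loop often boils down to: sample continuously, push a reading on a fixed schedule, and push immediately whenever a threshold is crossed.

```python
import random
import time

REPORT_INTERVAL_S = 60        # predefined reporting rate (hypothetical value)
TEMPERATURE_THRESHOLD = 80.0  # emit immediately above this reading (hypothetical value)

def read_temperature():
    """Stand-in for an actual sensor read."""
    return 20 + random.random() * 70

def send_to_collector(reading, reason):
    """Stand-in for pushing the reading to a database or message queue."""
    print(f"{time.time():.0f} temperature={reading:.1f} reason={reason}")

last_scheduled_report = 0.0
while True:  # firmware-style loop: runs until the device is powered off
    reading = read_temperature()
    now = time.time()
    if reading > TEMPERATURE_THRESHOLD:                     # threshold condition met
        send_to_collector(reading, "threshold")
    elif now - last_scheduled_report >= REPORT_INTERVAL_S:  # predefined rate reached
        send_to_collector(reading, "scheduled")
        last_scheduled_report = now
    time.sleep(1)
```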
User-generated content is rising to heights never before seen, now that large populations (which until very recently didn't have access to the Internet) are becoming regular Internet users all around the world.
It is hard to say whether social media has been more a consequence of this phenomenon than one of its causes, but social networks are among the organizations where most data is being kept nowadays - many popular open source tools for big data manipulation originated at places like Facebook, Google and Yahoo.
Disk space has become so cheap that most devices and applications are configured to log everything that can be logged, on the off chance it might some day, somehow, be useful to someone.
The sheer scale of the data involved in monitoring IT infrastructures with traditional SIEM (Security Information and Event Management) solutions has been prompting changes in all but the most naive of these systems, and most of these changes involve dealing with and analysing large data sets - hence the connection with the whole Big Data movement.
Big Data is changing the landscape for SIEM providers; in most cases it's not just a difference of scale - simply throwing bigger and faster hardware at the problem won't do.
Some of the issues that arise in the day-to-day operation of such systems are as follows:
Long Time Horizons
Data (mostly in the form of logs) needs to be stored for increasingly long periods of time because sometimes context is what separates a real threat from a false positive.
One small incident is perhaps not relevant if it happens only once, but the same issue happening every day for six months might be indicative of something lurking around the corner.
This means that an effective SIEM system needs to have elements to detect and act upon these APTs (Advanced Persistent Threats).
Most SIEM solutions are built on a traditional, relational Database Management System, which is not meant for this type of large, unstructured and relatively static data.
Inconsistent Data Formats
The sheer variety of log types and formats presents, in and of itself, a challenge for traditional SIEMs, which are generally built upon database systems that need some sort of regularity in the data. Companies are trying to move away from having to define each new log format in terms the underlying persistence layer can understand.
Store Once, Read Multiple Times
Logs and other types of monitoring information (both real-time and otherwise) aren't meant to be edited or changed in any way. They are mostly timestamped and automatically generated by devices and/or applications.
Many companies therefore find themselves using technologies meant for other types of data, which further aggravates the problem.
Not Knowing what to Look For
Users don't always know what to look for when trying to establish a correlation between different events (current and/or past); maybe after an incident has taken place they want to carry out a forensic examination.
SIEM solutions must allow for ad hoc reporting and visualization so that end-users can use the system in ways the original designer didn't think about.
Stretching this notion a little bit, many users end up treating their SIEM as a kind of log search engine with unopinionated visualization: the system provides the tools, and the users themselves find the correlations and connections between the data sources.
Similar Data that Doesn't Look So
Different devices sometimes describe the same data in device-specific ways, which makes it extra difficult for systems to determine what's similar and what's not.
For example, you might have two firewalls in your network, where one logs drops as DROP: <IP> <TIMESTAMP> and the other as DENY <TIMESTAMP> <IP>, or something like that. Systems need to be able to infer similarities like these, treat them as a single entity (Firewall Drops) and smooth out small noise of this kind.
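As a minimal sketch of what that normalization could look like (the two raw formats are the hypothetical ones from the example above, not any particular vendor's), both lines get mapped to a single canonical "firewall drop" event before being stored or correlated:

```python
import re

# Hypothetical raw formats from the example above:
#   Firewall A:  DROP: <IP> <TIMESTAMP>
#   Firewall B:  DENY <TIMESTAMP> <IP>
PATTERNS = [
    re.compile(r"^DROP:\s+(?P<ip>\S+)\s+(?P<ts>\S+)$"),
    re.compile(r"^DENY\s+(?P<ts>\S+)\s+(?P<ip>\S+)$"),
]

def normalize(line):
    """Map either raw format onto one canonical 'firewall_drop' event."""
    for pattern in PATTERNS:
        match = pattern.match(line.strip())
        if match:
            return {
                "event_type": "firewall_drop",  # one entity, regardless of vendor wording
                "src_ip": match.group("ip"),
                "timestamp": match.group("ts"),
            }
    return None  # unknown format; route to a catch-all bucket for later inspection

print(normalize("DROP: 10.0.0.7 2014-06-01T12:00:00Z"))
print(normalize("DENY 2014-06-01T12:00:01Z 10.0.0.7"))
```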
In today's tightly regulated and interconnected world, it's very useful to have ways to shield yourself from (and perhaps even benefit from) the cost of complying with regulations, while making sure security incidents are, to the best of your ability, prevented.
Logs and security event information are key areas to leverage if you want to stay ahead of other businesses in your area. The sheer volume of logs and event data collected from all sorts of devices has increased sharply over the years, due mostly to decreased costs in hardware (hard disks and memory, basically).
SIEMs have, for the last few years, been the preferred way of keeping track of such information in the workplace, but it's not always easy to justify investing in such products when the benefits are not so tangible.
We (IT Management) are generating as much, if not more, data within our enterprise than our actual business units are.
Here are some of the quantifiable benefits of installing a SIEM solution at your organization.
SIEM solutions may be among the most cost-effective ways to comply with regulations, and they can protect you from fines and/or lawsuits.
Increasing Efficiency/Slashing Downtime
SIEMs can, in addition to their obvious security value, help you visualize infrastructure bottlenecks and points of concern, due to the way they make information available to you.
This can even impact other business areas that, in one way or another, could make use of data that is shown on SIEMs for security purposes only, such as marketing, development and senior management.
More Effective (Centralized) Log Storage and Visualization
It becomes easier and cheaper to train staff (from different backgrounds - compliance, development, system administration and so on), since they will all be using a single system - a single interface - for log viewing, troubleshooting and forensic analysis.
Adding a Layer Between Viewing Logs and Having Access to the Machines where these Logs are Generated
You could use a SIEM solution to allow people to view (and even analyse) logs without giving them root and/or admin access to the underlying infrastructure.
Identifying Hitherto Unknown Usage Patterns
SIEM systems can be used for general data exploration. This is particularly the case when users do not know what they are looking for.
Users do not always know what they are looking for.
If the system provides easy-to-use data visualization and manipulation facilities (charts, graphs and tables), users can find out novel ways to derive value from SIEM solutions - ways the original designers never envisioned.
With all logged data (from across your whole IT infrastructure) neatly organized and classified (this being one of the main attributes of a good SIEM solution), it is just a matter of creating a simple script to harness that data and produce an executive summary with graphs and explanations to help senior management make clear decisions.
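A minimal sketch of the kind of script meant here, assuming the SIEM can export its already-classified events as a CSV file (the file name and the timestamp/event_type/severity columns are assumptions for illustration, not any particular product's export format):

```python
import csv
from collections import Counter

# Hypothetical SIEM export: one row per event, already classified by the SIEM.
# Assumed columns: timestamp, event_type, severity
events_by_type = Counter()
high_severity_by_type = Counter()

with open("siem_events_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        events_by_type[row["event_type"]] += 1
        if row["severity"] == "high":
            high_severity_by_type[row["event_type"]] += 1

print("=== Weekly security summary ===")
for event_type, total in events_by_type.most_common(10):
    print(f"{event_type:<25} total={total:<8} high-severity={high_severity_by_type[event_type]}")
```

From a table like this it is a small step to feed a charting library and attach the resulting graphs to the executive summary.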
BrightTalk Webinar: Using SIEM/Log Management to Achieve Significant ROI (Might require free Registration)
What made America great?
First of all, I would like to say that, while not American myself (unless you count as American everyone born in the American continent), I deeply admire American values and history - more than those of my own country.
I wasn't born in America nor live there but I feel every bit as American as those who do.
I consider countries to be largely arbitrarily defined borders on land, and therefore not entities separate from their people, but I will nonetheless use the term America to refer to most of the people who inhabit that land and also to those who simply see themselves as American and share its values, like myself.
I wasn't born in America, don't live in America and don't hold US citizenship, but it is the nation and set of values I somewhat identify with so I see myself every bit as American as someone who happens to have been born there.
I think that America as a set of ideals and values is much more relevant than America the nation-state.
The historical period I'm focusing on right here is basically 19th and 20th century America.
Disclaimer: I know I am referring to stereotypes and groups of people but I do so trying to explain things as we see them today. I know that not all Americans are like this and that not all those who are not American are not like this. People are individuals, not the groups they belong to. I also know that there were other groups that probably also helped, but I am listing those which I think were the most important.
America in its infancy was a blend of very hard-working and able people, like Germans, Englishmen and women, Central and Northern Europeans, Jews, and also Italians and other Mediterranean folk. More recently, East Asians and Indians have shown themselves to be very high-performing and hard-working people too.
The fact that most Americans aren't indigenous but rather came to America from other places is probably a very strong natural selector, favouring individuals who, living in oppressive and backward countries, decided to do something about it rather than wait. This is not dissimilar to entrepreneurs who take risks to produce value for themselves and others.
Immigration is highly selective. Those who immigrate to other countries are, by definition, people who, rather than wait for their situation to improve on its own, take their destinies (and their families') into their own hands and do something about it. Immigrants were entrepreneurs way before it was trendy.
Americans are naturally people who do rather than wait. Perhaps because if you come from an immigrant background like many Americans (early adopters if you will) you are already out of your comfort zone.
This means that most Americans have (or at least had at some point) a deeply ingrained will to achieve through their own efforts, which is representative of their work ethic.
They rarely expected or felt entitled to having things given to them other than what they traded for with their work and creative abilities.
Abundance of Land
Having an abundance of land (mostly good for farming and/or mining) could have proved a mixed blessing (picture oil-rich African and South American countries, rife with corruption and demagogues promising miracle solutions and/or failed ideologies to their people).
Fortunately, in the American case, it seems it hasn't. Although we obviously have no alternative-history America to serve as a control group, America is clearly as good as, if not more advanced than, most countries on Earth.
An abundance of land has helped America become, in addition to a great industrial power and the home of the world's best universities, one of the world's most productive agricultural economies.
Correlation does not imply causation but logic does. While government is useful in times of war and to provide basic infrastructure, history shows us that outsourcing services to government is extremely inefficient and drains a country of its resources fairly quickly.
Aside from moral implications and the risk of tyranny, large governments stifle growth and reduce incentives for entrepreneurs and businessmen and women to experiment and find new ways of providing goods and services more cheaply, efficiently and with higher quality.
Aside from moral implications and the risk of tyranny, large governments stifle growth and reduce incentives.
As the references I've collected (1 and 2) as well as many other sources show, America had a relatively small government (as measured by percentage of GDP) from 1900 up until very recently (and even today it is still a little smaller than those of European-style social democracies).
Prior to 1900 (18th and 19th centuries) you can imagine it was even smaller than this.
No Natural Enemies (other powerful nations as neighbours)
This is somewhat dubious, but I tend to think that America's lack of enemies (particularly early on in its history) was overall a positive influence on its greatness.
Those who argue otherwise hold that powerful enemies can bolster one's industries, motivation and resolve, as shown by the number of inventions people come up with during wars, which is certainly true.
I, however, think that "external threats" (real or otherwise) are too often used by those in charge to justify oppression and unpopular measures aimed at their people.
Governments around the world use the threat of an external enemy to convince their people into giving up their rights.
Owing to its unique position as the sole power in North America, it would have been extremely hard for politicians in America (back in the day, when communication wasn't very developed; nowadays it's a different picture altogether) to justify any kind of oppression or pushing of particular agendas with an appeal to "favour security over freedom" because of supposed external enemies.
W.I.P.: This is a work in progress
If you spend most of your time coding Information Systems and if you develop software within an MVC framework/mindset, the most stable, clean and reliable parts of your application are probably your database tables.
We usually build our applications (that is, add complexity) around these tables. We model our domain classes after the database (this is very likely the case if you're using a design pattern like Active Record), and we think and reason about our domain over a cup of coffee with our database diagram in our hands.
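As a hedged sketch of what that looks like in practice (the Order class, its columns and the in-memory SQLite database below are invented purely for illustration), an Active Record-style domain class mirrors the columns of its table and knows how to persist and find itself:

```python
import sqlite3

class Order:
    """Active Record-style class: its attributes mirror the columns of the 'orders' table."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")

    def __init__(self, customer, total, id=None):
        self.id, self.customer, self.total = id, customer, total

    def save(self):
        # the object knows how to persist itself: data and behaviour live together
        cursor = Order.db.execute(
            "INSERT INTO orders (customer, total) VALUES (?, ?)", (self.customer, self.total)
        )
        self.id = cursor.lastrowid
        return self

    @classmethod
    def find(cls, id):
        row = cls.db.execute(
            "SELECT id, customer, total FROM orders WHERE id = ?", (id,)
        ).fetchone()
        return cls(customer=row[1], total=row[2], id=row[0]) if row else None

order = Order(customer="Alice", total=99.90).save()
print(Order.find(order.id).customer)  # "Alice"
```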
We can't escape complexity. We can manage it and control it and understand it, but we can't escape it because we are modeling complex behaviour and processes.
I think that I should make something clear: there are two types of complexity:
- There's inherent complexity in all software systems which we cannot avoid and, without which, our application is basically worthless. Only complex problems are worth solving. Software systems need to be complex in order to justify their existence. You can't avoid this.
- There's accidental complexity too, which is complexity that doesn't add any extra value for your users. Under this category we can put things like bloated code, duplicated behaviour, over-generalized classes, large methods and so on. This is complex code that will make your software harder to change and extend, and won't get you any extra money. These are things which can (and should) be understood and tackled.
Where Complexity Builds up in Information Systems
It is just too easy to add a method to a model class. This may well be the most serious form of complexity creeping into your system, as it is also the most difficult to change because its changes will affect the whole application. If this becomes too unwieldy to maintain and extend, you're probably looking at a major overhaul ($$ and time) that will probably affect and/or break your controller too (probably not your views though).
It is not so bad to add many actions (that is, separate routes that each render a separate view) to a controller; this is the way it's supposed to be.
Views tend to get very complex and bloated when you try to do too much in a single view/template (I'm talking mainly about views where some sort of interactive action takes place, i.e. forms basically).
New models make themselves needed, unused models die out throughout the lifetime of an application.
Models always change through the lifetime of an application: new attributes (and models themselves) make themselves needed, unused attributes and models tend to die out as new, more effective ways to model your domain are figured out. This means that your views need to be constantly updated to match the moving targets which are your models.
From my experience, it's better to have many simple views than just a few with a lot of functionality each, as they are much easier to keep track of. You can always refactor common code that turns up across different views.
We generally don't pay as much attention to these as to the main code in our application (models mainly), as they are seen as some sort of second-class citizens. Their main objective is just to support the application and extract code that is not specific to our domain from our app.
Needless to say, not many of us take part in projects where the cleanliness and organization of testing code becomes a problem.
Ways to Mitigate it
Extract code to helper classes and methods if it starts to get unwieldy.
When you first need to do something with a model somewhere (for example in a controller, in order to select some data to display), write that code there (i.e. where it's actually used). If it's more than, say, 20 lines or so, or if it is needed by other parts of your application, extract the code to a helper method. Only if the helper method becomes too large and over-generalized should you move this code to the actual model class.
Helpers are code too!
We can't fall prey to the notion that helper classes are just somewhere we put ugly, ad hoc code we are ashamed of.
We never know how big our helpers will turn out to be so it's difficult to think up a clear solution up front.
One helper per controller - each controller only talks to its own helper!
What could be done is to create one helper per controller, and have each controller use only its own helper. If your controllers usually map to a domain class, it could help to extract all ad hoc code to a helper that's exclusive to that controller. If you need access to outside classes, keep those calls in your helper - and your controllers clean.
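A rough, framework-agnostic sketch of that layout (the module, class and function names below are made up for illustration and don't come from any particular framework): the controller stays thin and talks only to its dedicated helper, and the ad hoc query code lives in the helper rather than in the model.

```python
import sqlite3

# --- orders_helper.py (hypothetical): helper dedicated to the orders controller ---
def recent_orders_for(db, customer, limit=10):
    """Ad hoc query code lives here -- not in the controller, not (yet) in the model."""
    rows = db.execute(
        "SELECT id, total FROM orders WHERE customer = ? ORDER BY id DESC LIMIT ?",
        (customer, limit),
    ).fetchall()
    return [{"id": row[0], "total": row[1]} for row in rows]

# --- orders_controller.py (hypothetical): the controller only talks to its own helper ---
class OrdersController:
    def __init__(self, db):
        self.db = db

    def index(self, customer):
        # the controller stays thin and delegates the messy details to its helper
        return {"orders": recent_orders_for(self.db, customer)}

# Minimal wiring just so the sketch runs end to end.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
db.execute("INSERT INTO orders (customer, total) VALUES ('Alice', 42.0)")
print(OrdersController(db).index("Alice"))
```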