What is wrong with programming
As said in what is programming, creating software is quite similar to any other engineering: we create tools and machines to do different tasks for us, but in a very different environment. This is because the result of the programming process is
- non-material: unlike a hammer, a lamp or a chair, where each and every instance must be created from raw materials by repeated processes, software can be copied and reused without any limit on count or location. If I write a memory game, it can be copied and played by an unlimited number of people anywhere on the planet, without the "creation" step that the same memory game would require with paper cards.
- non-aging: all material goods age, get worn, require care to extend their life cycle and finally break. Software is not affected by time in this way: as long as we have hardware that can run our software, it runs exactly the same way millions of times without aging, and will never "break". It becomes unusable only when we can no longer provide its runtime environment (hardware and supporting software components).
So, if you really solve a problem well, you have solved it for anyone who can run that software, as long as they have the environment you used. In "business language": if you do your job well in programming, you lose your job. (If this sounds weird, just remember that Windows XP was "too good" to be replaced by Vista, resulting in a serious disaster for Microsoft.)
Today we laugh at those who seriously underestimated the possible number of computers and denied the possibility of personal computers altogether, but they were right in their time: it was totally irrational to pour serious amounts of money into a field with such a ridiculous "business model". They were serious, honest and conscientious people, so they could not imagine the "solution".
The software industry invented "artificial software aging" by changing the environment all the time: creating new operating systems, ending support for "old" hardware, withholding drivers for new hardware from old operating systems, and ensuring that old software will not run in the new environment. New frameworks, and the new requirements they generate on the end-user side, can only be fulfilled by newer and newer software – so users pay programmers to solve the same problem every year.
This is a shameless waste of client resources, but a revenue source for the IT business – and it makes no difference whether the user pays for the development
- directly: to the software company (big names like Microsoft, Oracle, ...), in the software price, consultancy and support agreements;
- indirectly: to the "gurus" who can manage and hack together the different Linux versions and tons of similar "free" gadgets – and who do the bazaar development "in their free time" while sitting in the sysop chair at a company;
- or even more indirectly: as a marketing cost built into everything, because factories, service providers, agencies, etc. pay Google...
Our IT industry is a true exhibition of "science fiction". Just a few examples:
- while chasing "customer needs", CPU clock speed went from 4.77 MHz to the GHz region with multiple cores: roughly a 1000-fold clock increase that, multiplied by four cores, makes machines some 4000 times faster. I started programming on a machine with a 20 MB hard drive; now I have 2 GB of RAM alone... Yes, today we can watch HD videos and count the hairs in Shrek's ear - but the boot time has remained almost the same... is this what the customer really needs?
- just think of ridiculous terms like "intellectual property" and the current patent and copyright wars. Ideas are there to be shared, discussed and used - not owned, traded or denied. This is totally insane from any viewpoint except that business model.
- programming is not related to creativity anymore - it is just another factory where you have to hack together big and buggy components somehow, wading through tons of howtos on the internet and keeping up with the latest acronyms and buzzwords. I fully agree with HalfSigma explaining why a career in computer programming sucks and talking about the death of the generalist software developer.
- we call this era the age of informatics, and this is true when you check the latest GAMES. But do you see a proper public health care system globally available to ALL patients? Do you see a single e-paper based reader with a memory card containing ALL the books for ALL the school years, and a touch-screen exercise book in your children's schoolbag - or 5 kg of paper books that become simple garbage the next year, but distort their backs today?
- in this "information age", the greatest companies are the information-garbage recycling facilities and user-time wasters (namely: Google, Facebook, game companies, etc.), who also sell their users' data, habits and passions for money. What?
Yes, information age, but medieval times...
We have different operating systems. They all define "the computer" (keyboard, mouse, file system, display, etc.) as the running environment – yet this definition is all too obsolete today. At the same time, they are all different, and the differences show up in our code – or we can use bigger frameworks or virtual machine based languages that define "the computer" themselves and provide the same runtime environment on different operating systems.
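A minimal sketch of how this looks in practice, assuming Node.js/TypeScript as the virtual machine layer (the `configDir` function and the "MyApp" name are illustrative, not from any particular library): even with one runtime everywhere, the OS differences leak back into the code the moment we touch an OS concept such as "where configuration lives".

```typescript
// A sketch, not a recommendation: one runtime, yet the OS still appears in the code.
import * as os from "node:os";
import * as path from "node:path";

// Where does per-user configuration live? Each OS answers differently,
// so every "portable" program re-encodes this table somewhere.
function configDir(appName: string): string {
  switch (os.platform()) {
    case "win32": // Windows: %APPDATA%
      return path.join(process.env.APPDATA ?? os.homedir(), appName);
    case "darwin": // macOS: ~/Library/Application Support
      return path.join(os.homedir(), "Library", "Application Support", appName);
    default: // Linux and other unixes: the XDG convention
      return path.join(
        process.env.XDG_CONFIG_HOME ?? path.join(os.homedir(), ".config"),
        appName
      );
  }
}

console.log(configDir("MyApp")); // "MyApp" is a made-up example name
```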
We use programming languages that couple data declaration with algorithm implementation – so they "eat" our solution and make it hard to reuse in a different language. When we have to create a system that runs in multiple environments, this goes totally out of control: huge server components or cloud implementations both poison the user-side architecture and code with their content.
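A minimal sketch of the coupling in TypeScript (the `Person` record and its fields are invented for illustration): the declaration below lives only in this language, so a Java or C++ component in the same system must re-declare the same structure, and the wire format between them is just an informal convention.

```typescript
// The declaration below exists only for the TypeScript compiler.
interface Person {
  name: string;
  birthYear: number;
}

const alice: Person = { name: "Alice", birthYear: 1984 };

// Crossing an environment boundary strips the declaration away:
const wire: string = JSON.stringify(alice);
console.log(wire); // {"name":"Alice","birthYear":1984}

// The receiving side sees untyped data and must trust the convention;
// the "as" cast is a promise, not a guarantee.
const received = JSON.parse(wire) as Person;
console.log(received.birthYear);
```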
We use HTML/Javascript to create a remote GUI, which is hard to produce and even harder to make look and behave the same across different browsers and operating systems on the client machine. The HTML language and the browsers were not designed for this task at all – and we use a pathetically wrong approach just because we could not create a commonly accepted way to declare and render a GUI... but we can use big frameworks again for this task (and then be unable to use an otherwise perfect component because it uses a different framework).
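A minimal sketch of the everyday symptom, in browser TypeScript (the prefixed method names are real historical browser variants; the double-click wiring is illustrative): one and the same GUI operation must be probed for at runtime, because each browser family once named it differently.

```typescript
// Vendor-prefixed fullscreen requests: one operation, several historical names.
function requestFullscreen(el: HTMLElement): void {
  const anyEl = el as any; // the prefixed variants are absent from standard typings
  if (el.requestFullscreen) {
    el.requestFullscreen(); // the standardized name
  } else if (anyEl.webkitRequestFullscreen) {
    anyEl.webkitRequestFullscreen(); // older WebKit-based browsers
  } else if (anyEl.msRequestFullscreen) {
    anyEl.msRequestFullscreen(); // older Internet Explorer / Edge
  }
}

// Illustrative wiring: go fullscreen on a double click.
document.addEventListener("dblclick", () => requestFullscreen(document.body));
```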