Why Do We Need Software Engineering?

To understand the necessity for software engineering, we must pause briefly to look back at the recent history of computing. This history will help us to understand the problems that started to become obvious in the late sixties and early seventies, and the solutions that have led to the creation of the field of software engineering. These problems were referred to by some as “The Software Crisis,” so named for the symptoms of the problem. The situation might also have been called “The Complexity Barrier,” so named for the primary cause of the problems. Some refer to the software crisis in the past tense. The crisis is far from over, but thanks to the development of many new techniques that are now included under the title of software engineering, we have made and are continuing to make progress.

In the early days of computing, the primary concern was with building or acquiring the hardware. Software was almost expected to take care of itself. The consensus held that “hardware” is “hard” to change, while “software” is “soft,” or easy to change. Accordingly, most people in the industry carefully planned hardware development but gave considerably less forethought to the software. If the software didn’t work, they believed, it would be easy enough to change it until it did. In that case, why make the effort to plan?

The cost of software amounted to such a small fraction of the cost of the hardware that no one considered it very important to manage its development. Everyone, however, saw the importance of producing programs that were efficient and ran fast, because this saved time on the expensive hardware. People’s time was spent freely in order to save machine time. Making the people process efficient received little priority.

This approach proved satisfactory in the early days of computing, when the software was simple. However, as computing matured, programs became more complex and projects grew larger. Whereas programs had previously been routinely specified, written, operated, and maintained all by the same person, programs began to be developed by teams of programmers to meet someone else’s expectations.

Individual effort gave way to team effort. Communication and coordination, which once went on within the head of one person, had to occur between the heads of many people, making the whole process very much more complicated. As a result, communication, management, planning, and documentation became critical.

Consider this analogy: a carpenter might work alone to build a simple house for himself or herself without more than a general concept of a plan. He or she could work things out or make adjustments as the work progressed. That’s how early programs were written. But if the home is more elaborate, or if it is built for someone else, the carpenter has to plan more carefully how the house is to be built. Plans need to be reviewed with the future owner before construction starts. And if the house is to be built by many carpenters, the whole project certainly has to be planned before work starts so that as one carpenter builds one part of the house, another is not building the other side of a different house. Scheduling becomes a key element so that cement contractors pour the basement walls before the carpenters start the framing. As the house becomes more complex and more people’s work has to be coordinated, blueprints and management plans are required.

As programs became more complex, the early methods used to make blueprints (flowcharts) were no longer satisfactory to represent this greater complexity. It thus became difficult for one person who needed a program written to convey to another person, the programmer, just what was wanted, or for programmers to convey to each other what they were doing. In fact, without better methods of representation it became difficult for even one programmer to keep track of what he or she was doing.

The time required to write programs and their costs began to exceed all estimates. It was not unusual for systems to cost more than twice what had been estimated and to take weeks, months, or years longer than expected to complete. The systems turned over to the client frequently did not work correctly because the money or time had run out before the programs could be made to work as originally intended. Or the program was so complex that every attempt to fix a problem produced more problems than it fixed. As clients finally saw what they were getting, they often changed their minds about what they wanted. At least one very large military software project costing several hundred million dollars was abandoned because it could never be made to work properly.

The quality of programs also became a big concern. As computers and their programs were used for more vital tasks, like monitoring life support equipment, program quality took on new meaning. Since we had increased our dependency on computers and in many cases could no longer get along without them, we discovered how important it is that they work correctly.

Making a change within a complex program turned out to be very expensive. Often it was so hard to get the program to do something even slightly different that it was easier to throw out the old program and start over. This, of course, was costly. Part of the evolution in the software engineering approach was learning to develop systems that are built well enough the first time so that simple changes can be made easily.

At the same time, hardware was growing ever less expensive. Tubes were replaced by transistors, and transistors were replaced by integrated circuits, until microcomputers costing less than three thousand dollars could do the work of machines that had cost several million dollars. As an indication of how fast change was occurring, the cost of a given amount of computing was decreasing by one half every two years. Given this realignment, the time and cost to develop the software were no longer so small, compared to the hardware, that they could be ignored.

As the cost of hardware plummeted, software continued to be written by humans, whose wages were rising. The savings from productivity improvements in software development, gained from the use of assemblers, compilers, and database management systems, did not keep pace with the savings in hardware costs. Indeed, today software costs not only can no longer be ignored; they have become larger than the hardware costs. Some current developments, such as nonprocedural (fourth-generation) languages and the use of artificial intelligence (fifth generation), show promise of increasing software development productivity, but we are only beginning to see their potential.

Another problem was that in the past programs were often written before it was fully understood what the program needed to do. Once the program had been written, the client began to express dissatisfaction. And if the client was dissatisfied, ultimately the producer, too, was unhappy. As time went by, software developers learned to lay out with paper and pencil exactly what they intended to do before starting. Then they could review the plans with the client to see if they met the client’s expectations. It is simpler and less expensive to make changes to this paper-and-pencil version than to make them after the system has been built. Using good planning makes it less likely that changes will have to be made once the program is finished.

Unfortunately, until several years ago no good method of representation existed to describe satisfactorily systems as complex as those being developed today. The only good representation of what the product would look like was the finished product itself. Developers could not show clients what they were planning. And clients could not see whether the software was what they wanted until it was finally built. Then it was too expensive to change.

Again, consider the analogy of building construction. An architect can draw a floor plan. The client can usually gain some understanding of what the architect has planned and give feedback as to whether it is appropriate. Floor plans are reasonably easy for the layperson to understand because most people are familiar with drawings representing geometrical objects. The architect and the client share common concepts about space and geometry. But the software engineer must represent for the client a system involving logic and information processing. Since they do not already have a language of common concepts, the software engineer must teach a new language to the client before they can communicate.

Moreover, it is important that this language be simple so it can be learned quickly.
