
Saturday, February 7, 2015

Software Construction Fundamentals

As with all activities in software engineering, construction is based on a set of fundamental concepts which serve to establish goals and drive much of the work that is performed.  Following is a brief introduction to these concepts.

Minimizing Complexity

In construction, more than any other activity, minimizing complexity is utterly essential.  Fortunately, there are many guidelines to help us achieve this goal, including but not limited to the following:

  1. Code should be simple, readable, and understandable rather than clever or overly compact.
  2. Code should exhibit good “code hygiene.” This means it uses meaningful names, follows principles and best practices, contains sufficient and accurate comments, contains few or no public attributes, uses constants rather than hard-coded values, contains no unused variables or functions, minimizes code duplication, provides consistent interfaces to classes, and so on. 
  3. Code should be constructed using a modular design. 
  4. Programmers should follow a set of standards or conventions for the style of their code. Coding conventions can include guidelines such as capitalization, spacing and indentation, comment styles, names of different types or categories of variables, the organization of method parameters, and more.
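To make the “code hygiene” guidelines concrete, here is a minimal before-and-after sketch in Python (all names and values are invented for illustration):

```python
# Before: a cryptic name and a hard-coded "magic number".
def f(a):
    return a * 0.0825 + a

# After: a meaningful name and a named constant, so the rate is
# defined in exactly one place and the intent is readable.
SALES_TAX_RATE = 0.0825

def total_with_tax(subtotal):
    """Return the subtotal plus sales tax."""
    return subtotal + subtotal * SALES_TAX_RATE
```

Both functions compute the same result, but only the second one communicates what it does and why.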

Anticipating Change

Someone once said, “If a program is useful, it will have to be changed.”  This may seem backwards at first glance, but it is true that if your program is being used, people will eventually find problems with it, think of things it could do better, or encounter unexpected situations that the program was not designed to handle.  Therefore, programmers should always think ahead to what kinds of changes may be required in the future, and try to structure their code so that changes can be made with minimal disruption to the original design and functionality.
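One common way to structure code for change is to isolate the parts most likely to vary.  As a small hypothetical sketch in Python, imagine a report generator whose output formats are the likely point of change:

```python
# The output formats are the part most likely to change, so they are
# isolated in one registry instead of scattered through if/elif chains.
def render_text(data):
    return ", ".join(str(x) for x in data)

def render_csv(data):
    return "\n".join(str(x) for x in data)

RENDERERS = {"text": render_text, "csv": render_csv}

def render(data, fmt="text"):
    return RENDERERS[fmt](data)

# Adding a new format later means adding one function and one
# registry entry -- no disruption to the existing, working code.
```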

Constructing for Verification

Programmers should consider how difficult it will be to test the code, and try to structure it in such a way that it can be easily and conclusively tested.  Recommendations for accomplishing this goal include:

  • Follow coding standards that are specifically intended to support unit testing.  This usually implies a modular design and the Single Responsibility Principle.
  • Perform regular code reviews with a focus on testability.
  • Organize code to make it compatible with an automated testing tool.
  • Use the simplest possible programming language control structures.
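A quick sketch of what constructing for verification can look like in practice (names invented for the example): a function that reads the system clock itself is hard to test conclusively, while one that takes the varying input as a parameter can be tested for every case.

```python
import datetime

# Hard to test: the function reaches out to the system clock itself,
# so a unit test cannot control what "now" is.
def greeting_untestable():
    hour = datetime.datetime.now().hour
    return "Good morning" if hour < 12 else "Good afternoon"

# Easy to test: the varying input is a parameter, so a unit test can
# pass in any fixed hour it likes and check the result conclusively.
def greeting(hour):
    return "Good morning" if hour < 12 else "Good afternoon"
```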

Reuse

Reuse comes in two different forms:

Constructing for reuse means developing the code in such a way that it can be reused in new systems. Code for this purpose must be general, cohesive, and easily decoupled from its original context if necessary.  In other words, it should follow the same principles of design that were discussed in my posts on Fundamental Software Design Concepts and Software Design Principles.  Code constructed in this way provides a high return on investment, because once it is developed, it can be used again and again without repeating the development costs.
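As a small illustration of constructing for reuse (the invoice example is invented), compare a context-bound function with a general, cohesive one:

```python
# Context-bound: mixes a general computation with one application's
# data shape and I/O, so it cannot be dropped into another system.
def print_invoice_average(invoices):
    print(sum(i["amount"] for i in invoices) / len(invoices))

# Reusable: a cohesive, general-purpose function with no ties to
# invoices, printing, or any particular data structure.
def mean(values):
    values = list(values)
    if not values:
        raise ValueError("mean() of empty sequence")
    return sum(values) / len(values)
```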

Constructing with reuse means integrating existing code with new code to create a new application.  Existing code could be something developed internally by your own organization, or it may come as part of a third-party library.  One advantage of this is the likelihood of increased reliability.  If code has been used successfully in previous projects, then it is likely that many of its defects have already been found and corrected.

Saturday, January 31, 2015

Software Design Principles

In software design, there is a direct, strong, and undeniable correlation between principles and quality.  These principles guide us to create good designs which become good code which becomes good software.  It is absolutely worth your time to understand these principles and remember them.
There are many more principles than these, but the ones listed here are solid, proven, and fairly well known, and they will guide you to create strong designs.

The Single Responsibility Principle says that each class and each function should have one single, specific, and well-defined purpose, and it should never do anything that is unrelated to its purpose.  This principle is essentially just a reminder of the importance of cohesion.
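A minimal Python sketch of the Single Responsibility Principle (the payroll scenario is invented for illustration): a class that both computes pay and formats reports has two reasons to change, so we split it.

```python
# One class, two unrelated responsibilities: computing pay AND
# formatting a report.  A change to either concern forces a change here.
class EmployeeMonolith:
    def __init__(self, name, hours, rate):
        self.name, self.hours, self.rate = name, hours, rate

    def pay(self):
        return self.hours * self.rate

    def report(self):
        return f"{self.name}: {self.pay():.2f}"

# Split so each class has one single, well-defined purpose.
class Employee:
    def __init__(self, name, hours, rate):
        self.name, self.hours, self.rate = name, hours, rate

    def pay(self):
        return self.hours * self.rate

class PayrollReport:
    def line_for(self, employee):
        return f"{employee.name}: {employee.pay():.2f}"
```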

The Hollywood Principle says, “Don’t call us, we’ll call you.”  What it means is that some classes should serve a high-level purpose and be the ones that drive processes, while others should simply perform the low-level tasks and nothing more.  High level classes should invoke methods of lower-level classes, but never vice versa.
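A sketch of the Hollywood Principle in Python (the importer example is invented): the high-level class drives the process and calls down into low-level hooks, never the other way around.

```python
# The high-level Importer drives the process; the low-level parsers
# never call "up" -- the framework calls them ("we'll call you").
class Importer:
    def run(self, raw):
        records = self.parse(raw)          # high level calls low level
        return [r for r in records if r]   # then applies shared logic

    def parse(self, raw):                  # low-level hook
        raise NotImplementedError

class CsvImporter(Importer):
    def parse(self, raw):
        return [line.strip() for line in raw.split("\n")]
```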

The Principle of Least Knowledge says, “Only talk to your immediate friends.”  When diagramming your design in a UML class diagram, each class should only invoke methods of classes that are directly connected to it.  This provides many advantages, one of the most significant being that it greatly reduces coupling.
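To illustrate the Principle of Least Knowledge with a small invented example in Python: instead of letting callers reach through a `Customer` into its internal `Wallet`, the `Customer` exposes the operation callers actually need.

```python
class Wallet:
    def __init__(self, balance):
        self.balance = balance

class Customer:
    def __init__(self, balance):
        self._wallet = Wallet(balance)

    # Expose the operation the caller needs, so callers talk only
    # to Customer -- their immediate "friend".
    def charge(self, amount):
        if self._wallet.balance < amount:
            raise ValueError("insufficient funds")
        self._wallet.balance -= amount
        return self._wallet.balance

# A violation would look like: customer._wallet.balance -= amount,
# which couples the caller to Customer's internal structure.
def checkout(customer, amount):
    return customer.charge(amount)   # one hop only
```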

Occam’s Razor is a well-known scientific principle that we can modify slightly to fit the purpose of software design.  It says if two different designs are both satisfactory, then the simpler one is better.

The Open/Closed Principle says that classes should be open to extension but closed to modification.  The idea is that code should be designed so that new functionality can be added without changing the existing, working code.
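A common way to realize the Open/Closed Principle is through polymorphism, sketched here in Python with an invented shapes example:

```python
import math

# Each new shape is added by extension (a new subclass); the existing
# total_area code is never modified.
class Shape:
    def area(self):
        raise NotImplementedError

class Rectangle(Shape):
    def __init__(self, w, h):
        self.w, self.h = w, h
    def area(self):
        return self.w * self.h

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return math.pi * self.r ** 2

def total_area(shapes):          # closed to modification
    return sum(s.area() for s in shapes)
```

Supporting a new shape later means writing one new subclass; `total_area` and the existing shapes stay untouched.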

The Liskov Substitution Principle says that a subclass should be usable anywhere its parent type is expected, without affecting the correctness of the program.  If you design your classes to depend on abstract classes and interfaces, then any well-behaved subclass of those can be substituted in later, regardless of how it implements the functionality behind the scenes.
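A minimal sketch of substitutability in Python (the notifier classes are invented for illustration): the client function depends only on the abstract interface, so any subclass can stand in.

```python
# Client code depends only on the abstract Notifier interface, so any
# well-behaved subclass can be substituted without changing the client.
class Notifier:
    def send(self, message):
        raise NotImplementedError

class ConsoleNotifier(Notifier):
    def send(self, message):
        return f"console: {message}"

class LogNotifier(Notifier):
    def __init__(self):
        self.log = []
    def send(self, message):
        self.log.append(message)
        return f"logged: {message}"

def alert(notifier, message):
    # Works with any Notifier -- how it implements send() behind the
    # scenes is irrelevant to this caller.
    return notifier.send(message)
```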

Favor composition over inheritance.  In many (but not all) cases, using composition for code reuse works just as well as using inheritance.  Composition offers looser coupling, improves reusability, simplifies testing and maintenance, and provides numerous other benefits.
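Here is a classic illustration in Python (a stack, chosen for the example): inheriting from `list` reuses code but leaks operations that can break the abstraction, while composition exposes only what makes sense.

```python
# Inheritance version: Stack IS-A list, so it also inherits insert(),
# __setitem__, and other operations that can violate stack discipline.
class InheritedStack(list):
    def push(self, item):
        self.append(item)

# Composition version: Stack HAS-A list and exposes only the
# operations that make sense, keeping coupling loose.
class Stack:
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def __len__(self):
        return len(self._items)
```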

Separate the aspects that vary from the aspects that stay the same.  If your design must work within several different contexts, identify what is different between those contexts and what stays the same.  If you can separate the part that stays the same, then you only need to worry about adapting the part that differs.
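Separating what varies from what stays the same can be sketched in Python with an invented shipping example: the label-printing workflow is constant, and only the rate calculation differs between contexts, so the varying part is passed in.

```python
# The part that varies: two interchangeable rate calculations.
def flat_rate(weight):
    return 5.0

def by_weight(weight):
    return 1.5 * weight

# The part that stays the same: the labeling workflow, which accepts
# whichever rate strategy the current context requires.
def shipping_label(order_id, weight, rate_strategy):
    cost = rate_strategy(weight)
    return f"order {order_id}: ${cost:.2f}"
```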

Saturday, January 10, 2015

Fundamental Software Design Concepts

As with all areas of formalized knowledge, the field of software design is built upon a set of fundamental concepts.  These concepts give us a list of general design goals, and when we combine them in meaningful ways, they produce a set of tried-and-true principles for creating high-quality software designs.

Abstraction means focusing on the essential characteristics of a class – the data and functionality it needs to serve its purpose, no more and no less – while suppressing irrelevant detail.  Abstraction takes different forms, but it is generally something that we want to strive for.

Coupling means there is a dependency between two or more classes.  Tight coupling makes code difficult to reuse and complicates testing and maintenance efforts.  We cannot eliminate coupling completely, because classes must collaborate, but we should aim to minimize it whenever possible.

Cohesion is a measure of how closely the members of a class are related.  A class should have one single well-defined purpose, and everything it contains should contribute to that one purpose.

Decomposition means that large, complex things should be divided into smaller, simpler things.

Modularization means that each component should have a specific, non-redundant purpose and well-defined interfaces.  This is related to decomposition.

Encapsulation means that the details of what an entity is made of (its variables and functions) are bundled into a single unit, such as a class.

Information hiding means that the encapsulated details of a class can be hidden from external entities for the sake of simplicity as well as reliability.  This is related to encapsulation.

Separation of interface and implementation means that the interface to a class should be separate from how it actually works behind the scenes.  This is a specific type of information hiding.
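Encapsulation, information hiding, and separation of interface and implementation can all be seen in one small Python sketch (the sensor classes are invented for illustration):

```python
# Callers depend only on the interface (the method name and meaning);
# the implementation behind it can be swapped without affecting them.
class TemperatureSensor:
    def read_celsius(self):
        raise NotImplementedError

class FakeSensor(TemperatureSensor):
    """One implementation: returns a canned value (handy in tests)."""
    def __init__(self, value):
        self._value = value      # hidden detail -- information hiding
    def read_celsius(self):
        return self._value

def describe(sensor):
    # Written purely against the interface, not any implementation.
    return "freezing" if sensor.read_celsius() <= 0 else "above freezing"
```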

Sufficiency means that a component contains enough of its essential features to be usable.

Completeness means that a component captures everything that its abstraction requires.

Primitiveness means that the design should be based on patterns that are easy to implement.

Separation of concerns means that stakeholders can focus on just a few issues at a time rather than trying to grasp the entire system at once.


Saturday, October 18, 2014

I Never Thought I'd Love Requirements Engineering

Anyone who knows me or has read my blogs knows that I'm a software engineer with a sick level of passion for the subject.  Software engineering comprises a variety of disciplines and knowledge areas, so it's only natural that one would be attracted to some areas, repulsed by others, and remain neutral toward the rest.

A current grad school class (CSCI715 at North Dakota State University with Dr. Gursimran Walia) required us to read two papers about requirements engineering, a topic I have always acknowledged as an utterly necessary activity, but not one that I found particularly enjoyable.  However, as I read these papers, I found myself being drawn in and becoming rather excited about some of the research directions in this area.  (I know I'm a nerd.  Shut up.)

Requirements engineering (RE) is arguably one of the most critical activities in a software engineering effort, perhaps second only to construction.  The aim of this process is to develop a clear understanding of how a new software system is expected to behave and perform.  The importance of this cannot be overstated, for without this understanding, other activities cannot be completed with certainty.  Design, development, testing, configuration management, deployment, and practically every other software engineering activity depends heavily on the presence of a complete, accurate, verifiable, and clear specification of requirements.  Without an understanding of the expectations, we have no way of knowing when the work is finished, and no means of assessing the correctness of the resulting behavior.

Legendary software engineer Frederick P. Brooks, in his paper No Silver Bullet: Essence and Accidents of Software Engineering, writes:

The hardest single part of building a software system is deciding precisely what to build. No other part of the conceptual work is as difficult as establishing the detailed technical requirements, including all the interfaces to people, to machines, and to other software systems. No other part of the work so cripples the resulting system if done wrong. No other part is more difficult to rectify later.  Therefore, the most important function that the software builder performs for the client is the iterative extraction and refinement of the product requirements. (Brooks 1987)

Given the magnitude of these implications, it is expected that software engineers would make efforts to improve the efficiency and effectiveness of the requirements engineering process.  Two papers that propose such improvements are Requirements Engineering: A Roadmap by Bashar Nuseibeh and Steve Easterbrook; and Research Directions in Requirements Engineering by Betty H.C. Cheng and Joanne M. Atlee.

Nuseibeh and Easterbrook begin in a typical fashion by justifying the importance of requirements engineering and then analyzing a formal definition of it.  Then they break away from the pack in a variety of ways:  First, they argue quite effectively that RE involves more than just business and technology, but that it also depends on psychology, anthropology, sociology, linguistics, and philosophy.  Second, they propose requirements management and communication as integral umbrella activities.  Though this appears to be stating the obvious, it is an area that many authors neglect to adequately address.  Also, rather than just providing simple examples of the primary RE activities, they include a moderately detailed list of approaches for accomplishing each one.  Finally, and perhaps most significantly, they reorganize the classic paradigm into more specific categories:  eliciting, modeling and analyzing, communicating, agreeing, and evolving.

The first of these activities is elicitation, the process of extracting software requirements from the people and environment surrounding the need for a new software system.  Nuseibeh and Easterbrook recommend beginning the elicitation process by identifying system boundaries, the results of which will continue to influence further elicitation efforts.  To accomplish this, they advise requirements engineers to identify the stakeholders, elicit their business and technical goals, and explore a series of scenarios including the way things are done before the new system is available and the ways that the new system is expected to change the performance of these tasks.  The techniques proposed to elicit this information include traditional data gathering, joint application design, prototyping, modeling, cognitive knowledge acquisition, and various contextual techniques.

Proceeding forward from elicitation is analysis, the process of solidifying, refining, and organizing the requirements obtained through elicitation.  The most common and effective technique for performing analysis is to create models -- an approach Nuseibeh and Easterbrook emphasize so strongly that they rename the activity after it: “Modelling and Analyzing Requirements”.  The authors describe the use of models for analysis in five distinct categories:

  1. Enterprise models that document an organization’s structure, operations, and goals;
  2. Data models to represent details and relationships in the information that a system must produce, process, and/or store;
  3. Behavioral models to explore the functional activities of people and systems;
  4. Domain models to capture important aspects of the business, technical, and/or physical environment in which a system will operate; and
  5. Non-functional requirements models to express the quality attributes of a system.

Throughout all of the phases in RE, communication intermingles with other umbrella activities.  Communication is primarily documentation which takes numerous forms depending on the nature of the requirements being communicated and the intended audience.  Proper documentation simplifies the process of requirements management by enabling requirements traceability (RT), which is the ability to track a requirement from its original elicitation through implementation and beyond.

One key thing that Nuseibeh and Easterbrook handle very differently from most authors is their treatment of verification, which is an attempt to ensure that requirements are correct and complete.  Instead, they refer to the concept of agreeing requirements, which concentrates more on validation than verification.  Validation is the task of ensuring that the requirements statements accurately reflect the true needs of the stakeholders.  Agreeing requirements involves making sure that the stakeholders all agree with the requirements statements, and may possibly involve some negotiation techniques to reach consensus.

Finally, Nuseibeh and Easterbrook confront the inevitable problem of change by including a discussion on evolving requirements.  The process of evolving requirements is really just a form of another software engineering activity -- change management, which involves expecting change and following an established procedure for responding to it.  Since requirements can change at any point, the process of evolving requirements incorporates all other phases of requirements engineering, possibly requiring an engineer to revisit phases that have already been performed in order to successfully incorporate the new changes.  Evolving requirements exemplifies the need for requirements traceability and demands, at the very least, a return to elicitation and analysis.

Whereas Nuseibeh and Easterbrook’s paper focused primarily on current practices in requirements engineering, Cheng and Atlee’s paper looks to the future by exploring and summarizing research directions.  Essentially, they begin with an explanation of why RE is difficult, summarize the current state of the art of RE research, and then proceed to identify future research areas to help manage the difficulties identified.

Cheng and Atlee do an exceptional job of describing the purpose of each RE phase, presenting each one with a concise definition and justification for its existence.  By concentrating on each activity’s intent, the authors draw clear connections to the types of solution-based research that most directly benefit each task.  Elicitation involves identifying requirements, so most research in elicitation focuses on improving the precision, accuracy, and variety of those requirements.  Modeling involves creating abstract representations of requirements, so research in this area involves improvement of scenario-based models and techniques for manipulating them.  Analysis involves the refinement and organization of requirements, so research in this area attempts to improve evaluation techniques.  Validation ensures that requirements accurately reflect the needs of stakeholders, so research into validation techniques deals mostly with communicating the requirements to stakeholders in a clear way so that an accurate assessment can be made.  Requirements management comprises numerous responsibilities, so the research areas here cover a diverse array of concerns:  automation of RE tasks, analyses of requirement stability for the purpose of isolating those most likely to change, and techniques for organizing large numbers of requirements.

Following the discussion of solution-based research, the bulk of the paper concentrates on evaluation-based research, in which there are a number of research strategies which the authors first summarize and then discuss in detail:

  • Paradigm shift: Radically new ideas change the way of thinking.
  • Leveraging other disciplines:  Analogies to other disciplines are drawn to help find solutions.
  • Leveraging technology:  New technology is applied to solving RE problems.
  • Evolutionary:  The state of the art is advanced by incremental but meaningful improvements.
  • Domain-specific:  A problem is solved in a way that applies to a specific application domain.
  • Generalization:  A domain-specific technique is generalized to apply outside of that domain.
  • Engineering:  RE techniques are simplified to make them accessible to practitioners and students.
  • Evaluation:  Existing RE techniques are assessed according to some benchmark or objective.

In addition to the detailed coverage of research strategies, Cheng and Atlee identified nine “hotspots” -- areas that are expected to have the largest impact on software engineering:

  1. Scale:  The size and complexity of software systems creates a need for improved modeling, analysis and requirements management techniques.
  2. Security:  New technologies present new security threats, which may or may not benefit from improved RE.
  3. Tolerance:  Some requirements are difficult to quantify precisely, so tolerances must be established to assess sufficient correctness.
  4. Increased Reliance on the Environment:  Systems are increasingly interoperable with other systems, so we need better ways to represent scope boundaries and interfaces.
  5. Self-Management:  Requirements and environment change frequently, and a self-managing system that can react and adapt would have numerous applications.
  6. Globalization:  Communication is difficult for globally distributed development teams.  There is a need for tools and techniques to facilitate collaboration and negotiation.
  7. Methodologies, Patterns, and Tools:  A need exists for improving the transfer of RE techniques from research into practice.
  8. Requirements Reuse:  Products from similar product lines have a large number of requirements in common.  Reusable requirements would shorten engineering timeframes and benefit from past experience, but must be adequately flexible to adapt to subtle variations.
  9. Effectiveness of RE Technologies:  The results of RE efforts are only useful if they are applicable to real-world problems.  Evidence of applicability is required in order to assess the value and utility of RE techniques and research.

The paper culminates in a thoughtful and inspiring list of recommendations, many of which involve collaboration and communication:  Researchers should work with practitioners, other researchers, and industrial organizations; repositories of artifacts should be jointly established; researchers should conduct evaluation research and proactively consider emerging technologies; and academics need to confer this knowledge to students of disciplines related to software development.

Quite a few of these research directions piqued my interest and caused me to reconsider my own selection of research area.  Cheng and Atlee’s description of the engineering research strategy rang with familiarity because it rephrased my own proposal for a book that I have outlined.  My concept involved leading the reader through the major SE lifecycle activities, describing the purpose of each one, and exploring ways to accomplish the objectives without the need to endure the complex, formal processes prescribed by most SE literature.  These simpler alternatives make software engineering much more attainable to the average developer or student, which conforms to the way Cheng and Atlee described engineering as a research strategy.

Of the nine hotspots, however, requirements reuse really grabbed my attention.  Again, the idea echoed in my mind as something for which I too have expressed a need.  As a game developer, I notice that the same basic requirements seem to be elicited repeatedly.  On the most basic level, it is fairly obvious that games within the same genre have many of the same functional gameplay requirements.  From a system perspective, each product released for a specific target platform shares many requirements with other products developed for that same platform.  Perhaps we can even consider this at a more general level, e.g. nearly all games involve similar multimedia needs, like graphics, animation, and audio.

This idea may even extend to non-functional requirements.  For instance, games generally place the highest priority on performance.  However, an MMO server also has a critical need for security, games that are intended to be expanded with downloadable content (DLC) have a need for extensibility, and the ever-increasing number of gaming devices demands portability.  I simply wonder if the possibility exists of establishing a set of core requirements that serve as a starting point for RE, and can then be extended, overridden, and supplemented to satisfy the unique needs of a new game product.

As I consider how one might go about researching the topic of requirements reuse in the domain of video games, I realize that there are a number of potential challenges.  First, we would need to define scope boundaries to provide focus for the types of requirements we will specify.  Second, this specification should be formed by experienced, professional game developers, preferably those who have completed multiple products.  Third, the requirements statements would need to be used in several projects so that we have a valid sample size.  Fourth, we would need evaluation criteria for assessing the effectiveness of the requirements after the project has completed.  Finally, and most importantly, we would need a few different game development groups who are willing and able to incorporate our requirements statements, track the requirements accurately, and provide detailed, honest feedback on the outcomes.

Only after contemplating the content of these papers have I begun to feel genuine excitement about requirements engineering.  From this point forward, I am now considering requirements engineering research as a viable dissertation area.

Tuesday, October 7, 2014

An Introduction to Software Architecture

I teach an undergraduate course in software engineering that covers a lot of great information, but is disappointingly light on the subject of software architecture.  As a software engineer, I find it a bit disturbing that such a vital topic is handled so summarily.

At the request of my students, I put together a supplemental presentation on the topic.  When asked if I had any notes from my talk, I had to admit that I did not.  So, for the sake of my students and whoever else is reading this blog entry, here is a written form of my supplemental presentation on software architecture.

Introduction

When you think of the word “architecture,” what comes to mind?  You might imagine the styles in which buildings are designed.  Some might think of electronic devices and how all of their parts work together.  Others might think of how computing systems are connected in a network.

All of these are examples of architectures, and there are many more.  We software engineers are concerned with software architecture, which shares several commonalities with the types of architecture mentioned above:  Software architectures have recognizable styles.  Software architectures describe the parts and their purposes.  Software architectures show how the parts connect and communicate.

Many definitions exist for what software architecture is.  Some are casual and informal while others are precise and detailed, but they all generally agree on the main point:  It is a representation of the major components of the software and how they are connected.  It is a framework that describes the overall structure of a software system and how all of the parts are integrated together into a single, cohesive product.

Architecture is not concerned with how the components will perform their functions.  Rather, architecture is only concerned with what pieces will exist and how they connect and communicate with each other.  Architecture encompasses the structures of a system, supports general properties and qualities, and concentrates on the behavior and interaction of elements.

Given this description, architectural design may seem like a simple task, but many different styles of architecture exist and more than one may be suitable for a given application.  Each has its benefits which can help guide the decision-making process.

The Importance of Software Architecture

It is widely accepted that a carefully considered and well documented architecture provides the highest return on investment of any software engineering work product.  The reason it has such a high payoff is because of the number of benefits it provides as well as the significance of those benefits.  There are three key reasons why architecture matters.

First, it improves communication among stakeholders.  As you may know, communication is considered one of the greatest challenges in software engineering.  Since architecture can be represented using a variety of textual and diagramming notations, it can take on forms that are useful to people of diverse backgrounds.  This enables communication between people who may otherwise find it difficult to do so.  Also, architecture is documented in a variety of views which show different aspects of the system.  This means that stakeholders can view the architecture from a perspective that makes sense to them and addresses their specific concerns.

Second, it captures design decisions that will influence nearly all of the software engineering work that follows.  Here are some examples, and there are numerous others:

  • Best practices suggest that the configuration manager should maintain a directory structure that roughly mirrors the architecture.  This simplifies maintenance by helping to pinpoint the precise location of faults as well as guiding various decisions during deployment.
  • Software designers frequently come up with multiple designs that would solve a given problem.  An understanding of architectural priorities helps them to select a candidate design that would best support the overall quality goals of the system.
  • The development team who constructs the software makes countless decisions through the development process, and many of these decisions are made easier by understanding the architecture.  It also divides the system into distinct elements which the development team can prioritize and use as a basis for metrics, planning, and estimation.
  • The testing team can refer to the architecture to identify areas of concern, and testing coverage can be planned accordingly.  Also, testing priorities are more easily established when influenced by the architecture because testers can focus their efforts on the qualities that are most important to stakeholders.

Third and finally, architecture provides a relatively small, simple, and reusable model of the overall software system.  At this level, there are many similarities among the architectures of entirely different software.  Therefore, the same architecture can be reused in many different software systems.  Once an architecture has been used on a project, it can be reviewed for its effectiveness, and then reused or slightly modified for future projects that have similar nonfunctional requirements.  The presence of a simple model makes it easier to determine whether an architecture will work in a new scenario, or how to make any necessary modifications.

Influences on Software Architecture

The importance of architecture as a communication tool comes from the fact that it must satisfy a diverse array of needs, such as technical needs, business needs, and other input from stakeholders.  In a later section below, we will discuss how these concerns can all be incorporated into the decision-making process.

Technical Factors

Of course the architecture must allow for the inclusion of functional requirements – the specific behaviors that the software is expected to perform.  Functional requirements are usually independent of the architecture, but we still must ensure that the architecture does not somehow obstruct them.

From an architectural standpoint, we are more interested in nonfunctional requirements (NFR) – general properties that the system must possess.  The architecture can enable the software to have some of these qualities while hindering others.

Nonfunctional requirements are also known by other names.  They are often called quality attributes.  Some call them by their nickname:  “the ilities”.  This nickname comes from the fact that many of them end with “-ility” – portability, modifiability, extensibility, and so on.

Most NFRs fall into these broad categories:

  • Functionality – the software is sufficiently secure, contains reusable components, and can be ported to other computing platforms if necessary.
  • Usability – the software is reasonably easy to learn and use efficiently.
  • Reliability – the software is available when users need it, rarely breaks down, and is capable of recovering from problems.
  • Performance – the software runs reasonably quickly, uses system resources economically, and can be scaled to support more data, interfaces, or users.
  • Supportability – the software can be modified, extended, or repaired without excessive difficulty.


Business Factors

In addition to technical requirements, the architecture must also support business concerns, like development costs, maintenance costs, marketability, and so on.  Some nonfunctional requirements span both technical and business concerns.  For example, supportability implies specialized, technical work being performed, but the longer it takes to do so, the more it costs.  Therefore, it is both a technical factor and a business factor.


Stakeholder Input

The ultimate driver for any kind of design decision is the set of needs that the stakeholders have.  If the resulting system does not serve the needs of its stakeholders, then it is a failed product.

A Summary of Common Architectural Styles

Software architectures and building architectures have something meaningful in common:  They both have defined styles that provide certain clues to their layouts and features.

Here is a list of some common architectural styles.

  • Data-Centered Architectures for systems whose primary function involves accessing or manipulating a central data store.  Good for database-centric applications.
  • Dataflow Architectures for systems that perform a complex series of data processing steps between input and output.  Good for systems that perform a long series of validations, calculations, and/or transformations.
  • Call-and-Return Architectures for systems whose functionality can be divided into discrete, mostly independent tasks.  Good for systems that need to be easy to modify or scale.
  • Tiered Architectures for systems whose major elements may reside on separate machines in a distributed environment.  Good for systems that need to communicate remotely across a network.
  • Object-Oriented Architectures for systems that can be logically separated into distinct components that communicate through messages.  Good for systems that can take advantage of an existing code base by reusing or extending it.
  • Layered Architectures for systems whose elements can be arranged in a hierarchical fashion based on their level of abstraction or generalization.  Good for systems that require portability or those that need to encapsulate a particularly complex subsystem.
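To ground one of these styles in code, here is a minimal dataflow (pipes-and-filters) sketch in Java.  The filter steps are hypothetical, not taken from any particular system; the point is only that each filter is an independent transformation and the pipeline composes them in order.

```java
import java.util.List;
import java.util.function.UnaryOperator;

// A minimal pipes-and-filters sketch: each filter is an independent
// transformation, and the pipeline feeds the output of one into the next.
public class Pipeline {
    private final List<UnaryOperator<String>> filters;

    public Pipeline(List<UnaryOperator<String>> filters) {
        this.filters = filters;
    }

    public String run(String input) {
        String data = input;
        for (UnaryOperator<String> filter : filters) {
            data = filter.apply(data);  // output of one filter feeds the next
        }
        return data;
    }

    public static void main(String[] args) {
        // Hypothetical filters: cleanup, normalization, transformation.
        Pipeline p = new Pipeline(List.of(
            s -> s.trim(),            // cleanup/validation step
            s -> s.toLowerCase(),     // normalization step
            s -> s.replace(' ', '_')  // transformation step
        ));
        System.out.println(p.run("  Hello World  "));  // hello_world
    }
}
```

Because each filter knows nothing about its neighbors, steps can be reordered, added, or removed without touching the others – exactly the property the dataflow style promises.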


Selecting an Architectural Style

Here is where things can get tricky – many software applications could be built upon more than one of these architectures.  So how do we go about selecting an architecture when there are so many viable possibilities?

If we are fortunate enough to have documented architectures from previous projects, we have two models to help us out.  A guidance model is a document that summarizes the experience of using a particular architecture, along with an explanation of what problems were encountered and what decisions were made.  When considering multiple architectures, their guidance models can be used to create a decision model which summarizes the relevant points from the guidance models and provides a side-by-side comparison of them.

If we do not have such information available to us, then we can create our own analysis model to gain an understanding of what qualities (NFRs) we will need the architecture to support for us.  An analysis model is a representation of the system, usually from the user’s point of view, that shows us what the users will expect from the system and what constraints and limitations may exist.

Once we have an analysis model, we can apply a technique called the Architectural Tradeoff Analysis Method (ATAM).  The ATAM gives us a way to predict whether an architecture will provide the desired quality attributes.

The Architectural Tradeoff Analysis Method (ATAM)

Here is a brief summary of how the ATAM works:

First, use analysis models to understand what the stakeholders expect in terms of nonfunctional requirements.  It may require multiple models to make sure that all stakeholder concerns have been addressed.

Then select a few candidate architectural styles and diagram them in a way that specifically shows how they might support the NFRs.  For example, if security is a top priority, then diagram the architecture to highlight how it might allow security to be implemented.  This type of diagram is called a view, and the same architecture can be represented in multiple views.  (In fact, that is a best practice.  We will discuss the 4+1 view model below.)

Experiment with each view by making a change to it, and see how that change affects the quality attribute.  Make this same change with all of the views and see how other qualities might be affected.  If a small change has a large, negative impact on the quality, then we have identified a sensitivity point which becomes a major factor in the evaluation of the architecture.

Once sensitivity points have been determined, we have a somewhat reliable picture of which architecture will be most likely to support the nonfunctional requirements.

The ATAM is likely to be repeated a few times as new details emerge.

Documenting Software Architecture

One of the clearest signs of a quality architecture is how well it is documented.  Most software architects advise that every architecture should have at least two views:

  • A static view that shows the overall structure of the architecture.
  • A dynamic view that shows how the elements behave and communicate.

Others argue that two views are insufficient to communicate all stakeholder concerns.  A much more thorough approach is preferred for most large projects.  The 4+1 view model is considered to be the best overall way to document software architectures for complex systems.  It consists of five views, each of which represents the architecture from a different perspective.

  1. The logical view focuses on functionality.  It is usually represented in UML class diagrams or sequence diagrams.
  2. The development view focuses on the components of the software that must be built by the programmers.  It is usually represented by UML component diagrams.
  3. The process view focuses on dynamic aspects of the system and shows how the elements behave and communicate.  It is often represented by UML activity diagrams.
  4. The physical view focuses on how the software will be physically arranged once it is deployed to the users.  A common way to represent this is with UML deployment diagrams.
  5. The scenario view (a.k.a. use case view) describes sequences of activities and interactions between elements of the system.  There is no standard diagram for communicating this information.  It must be done in a way that the users can understand, which may or may not involve technical diagrams.


FYI:  The reason we do not just call it the “5 view model” is because of the nature of the scenario view.  Rather than being a distinctly different view from the other four, it is usually a combination of the others to constitute the user’s view of the system.

Summary

The diversity of software architectures makes it difficult to define precise processes for specifying them or defining precise criteria for assessing their quality.  We do have some processes that have been incredibly effective, such as the ATAM, and we have documentation strategies to help us document and communicate the important details, such as the 4+1 view model.  As software engineering continues to mature as a discipline, activities such as architectural design are becoming more effective and refined.


Friday, June 21, 2013

My CSDA (Certified Software Development Associate) Experience

Software engineering has been a passion of mine since before I knew there were words for it.  Last year I learned that the prestigious IEEE offers certifications in this field, and my ears perked up.

Planning My Attack

I investigated the two different offerings:  CSDA (Certified Software Development Associate) for entry-level developers, and CSDP (Certified Software Development Professional) for experienced ones.  My first impulse was to shoot straight for CSDP.  After all, I have several years of software development experience and recently completed a master's degree in software engineering in which I was less than one tenth of a grade point away from graduating with distinction.  By IEEE's recommendations, I should be a candidate for CSDP.

However, I decided to go for CSDA first, even though my experience and education should drastically over-qualify me.  My reasons:

  • I teach an undergraduate software engineering course.  If there is any hope of preparing my students for certification, it would be at the CSDA level.  I would need to assess the content firsthand in order to accurately convey the experience and expectations to them.
  • I really want to go for CSDP, and CSDA might offer a somewhat gentler preview of what the exam might be like.
  • Unlike CSDP, the CSDA doesn't expire, so I won't be trying to maintain two similar certifications after I get my CSDP in the future.
  • I have lofty aspirations of someday becoming a presenter for the IEEE Computer Society.  I want both certifications so I'll know how they differ and can accurately answer questions about them.
  • The professional development budget that my employer provides will cover the costs of both certifications, so why not?

My Preparation Process

I started by attending the IEEE Metro Area Workshop session on Software Engineering Essentials.  Then I purchased six months of access to the relevant courses in the IEEE e-learning library.  Immediately I discovered that I was right to go for CSDA first.  Some of the review questions were extremely difficult, even with my education and experience.  Tiny nuances that seem trivial in practice can have the power to completely shift the focus in a theoretical/hypothetical context.  This was perhaps the greatest lesson I learned through this process.

At the end of my six-month subscription, I scheduled the exam for the first available time slot that fit into my busy schedule.  It was over a month away from that point, so I had some time to keep studying.  I carried my favorite software engineering book (Software Engineering: A Practitioner's Approach by Dr. Roger S. Pressman) with me everywhere I went so I could browse through it every chance I got.  I read the entire SWEBOK guide online, and even read the new drafts as they became available.  I also skimmed relevant message boards to see if anyone was talking about these exams, but very few people were.

Exam Day Arrives!

Finally the scheduled exam day arrived.  I had surveyed the battlefield the day before by driving past the testing center to make sure I knew exactly where it was.  That night, I armed for battle by reviewing all of the SWEBOK knowledge areas as well as the engineering, math, and computing foundations.  Then I tried unsuccessfully to get a restful sleep so my mind would be sharp the next day.

Two things I love: Java and Mountain Dew
That morning (actually this morning), I awoke long before my alarm sounded.  I put on my Java polo shirt as a motivational reminder of the last certification I conquered.  After a delicious, energizing breakfast of peanut butter and grape jam, I popped some vitamins, downed a Dew, and headed to the testing center.

The check-in procedure was quick and simple.  I just had to sign a few things and empty my pockets into a secure storage locker.  You are not allowed to carry anything into the testing room that could be used to record or copy the questions, or to cheat by looking things up.  I carried only my driver's license and the key to the storage locker.  I was allowed to borrow a marker and a wet-erase sheet which I only ended up using on one question -- to diagram a cyclomatic complexity problem.
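For readers unfamiliar with the metric I diagrammed, here is a hypothetical method (not the exam question) annotated with how cyclomatic complexity is typically counted:  V(G) = number of decision points + 1, where compound conditions count each predicate separately.

```java
// A hypothetical method used only to illustrate counting cyclomatic complexity.
// V(G) = decision points + 1.  Decisions below: 'if' + '||' (2), the loop (1),
// 'if' + '&&' (2), for a total of 5, giving V(G) = 6.
public class Complexity {
    static int countPositiveEvens(int[] values) {
        if (values == null || values.length == 0) {   // 2 decisions
            return 0;
        }
        int count = 0;
        for (int v : values) {                        // 1 decision
            if (v > 0 && v % 2 == 0) {                // 2 decisions
                count++;
            }
        }
        return count;                                 // 5 decisions + 1 = 6
    }

    public static void main(String[] args) {
        System.out.println(countPositiveEvens(new int[]{2, 3, 4, -6}));  // 2
    }
}
```

Counting the short-circuit operators as separate decisions is exactly the kind of tiny detail the prep course drilled into me.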

Most of the questions were surprisingly straightforward.  I was very glad that I had taken the prep course because some of them involved those tiny details that seem insignificant unless you really understand the intentions of software engineering processes, methods, and techniques.  There were a few on which I had to venture an educated guess based on various assumptions.  But there were two that really got under my skin.

Two Irritating Questions

I had to agree with an NDA that prevents me from repeating the questions here, so I'll summarize in a nondescript way the two questions that angered me.  One question showed a class diagram depicting an inheritance hierarchy using a mix of concrete and abstract classes, then asked which of four statements was true for the diagram.  However, all four options were technically false.  I was able to make a reasonable assumption by granting some semantic leniency, but doing so led to two valid answers based on that rationale.

The other objectionable question was even worse.  It was a Boolean logic notation question that gave a compound statement, assigned the letters A, B, and C to its three elements, and then asked which expression was equivalent to the original statement.  The question itself was simple, and I could have answered it easily had it not been multiple choice.  The problem was that answer A was identical to answer B, and both were wrong.  Answer C was identical to answer D, and again, wrong.  So there were no right answers, and the four choices formed two duplicate pairs.
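The NDA keeps me from reproducing the actual question, but for the curious, here is a generic, hypothetical way to sanity-check a propositional equivalence:  enumerate every truth assignment and compare both sides.  The equivalences below are well-known laws, not the exam's expressions.

```java
// Not the exam question (NDA) -- just a brute-force equivalence check:
// two Boolean expressions over A, B, C are equivalent if and only if they
// agree on all eight truth assignments.
public class TruthTable {
    public static void main(String[] args) {
        boolean[] tf = {false, true};
        for (boolean a : tf) {
            for (boolean b : tf) {
                for (boolean c : tf) {
                    // De Morgan's law: !(A && B) == (!A || !B)
                    assert (!(a && b)) == (!a || !b);
                    // Distribution: A && (B || C) == (A && B) || (A && C)
                    assert (a && (b || c)) == ((a && b) || (a && c));
                }
            }
        }
        System.out.println("All equivalences hold.");
    }
}
```

Eight rows is all it takes for three variables, which is why the question would have been trivial as a free-response item.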

Victory!

The score report was printed at the front reception desk.  Aside from my name, address, and other identifying information, it simply said this:
We are pleased to inform you that you have achieved a scaled score of 170 or higher and have thus passed the CSDA examination.
It would have been nice to know how I performed in the various knowledge areas, particularly on those with faulty questions, but overall I'm just relieved to have this challenge behind me.

On to the next!

Sunday, December 2, 2012

Considering an Architecture for the New Engine

In my last post, I identified the top priority quality attributes for my new game engine. In a nutshell, they are:
  1. Portability to allow testing of console and mobile games in a PC environment, plus the ability to offer games to a broader market. 
  2. Performance to utilize resources economically, allocating them to the game code as much as possible. 
  3. Extensibility to provide simple hooks and interfaces for connecting game code and other custom components without excessive coupling.
After doing some reviewing, pondering, and literally sleeping on it, I think I have a pretty solid idea.  Here I will attempt to describe it and how I arrived at it.

First, the top priority is portability, which implies to me that we must follow the old object-oriented design principle of separating the parts that vary (those relating to the platforms) from the parts that stay the same (the core of the engine).  The part that varies here also represents a dependency of sorts because it provides the link between the application code and the operating system.  So if we step back and look at this from a distance, we see that layers become visible -- the hardware supports the OS supports the platform layer supports the engine core supports the game code.  A layered architectural structure emerges on its own, very naturally.


A fortunate side effect of this layered architecture is that it produces a modular structure, which goes far to support the goal of extensibility.  If I were to refine this (which, of course, I will), the game engine itself would also be broken into layers, from lower-level interfaces with third-party libraries, all the way up to higher-level services such as artificial intelligence support and custom UI components.  I will document these things in greater detail as the project progresses.
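As a rough sketch of that separation (the names Platform, DesktopPlatform, and EngineCore are hypothetical, not actual MHFramework code), the idea is that the engine core depends only on a platform interface, so each target platform supplies its own implementation:

```java
// Hypothetical sketch of the platform/core separation described above:
// the engine core depends only on the Platform interface, never on a
// concrete OS binding, which is what keeps the engine portable.
interface Platform {
    String name();
    long timeMillis();   // a platform-specific timing service
}

class DesktopPlatform implements Platform {
    public String name() { return "desktop"; }
    public long timeMillis() { return System.currentTimeMillis(); }
}

class EngineCore {
    private final Platform platform;   // injected, so the core stays portable

    EngineCore(Platform platform) {
        this.platform = platform;
    }

    String describe() {
        return "engine running on " + platform.name();
    }
}

public class LayeredEngine {
    public static void main(String[] args) {
        // Supporting a console or mobile target would mean writing a new
        // Platform implementation -- with no changes to EngineCore at all.
        EngineCore core = new EngineCore(new DesktopPlatform());
        System.out.println(core.describe());  // engine running on desktop
    }
}
```

This is the "separate the parts that vary from the parts that stay the same" principle expressed directly in the type system.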

The only high-priority attribute remaining is performance, which, although not expressly encouraged by the layered approach, isn't necessarily hindered by it either.  Within the engine's layers, there will be countless decisions to be made that will affect performance.  The architectural structures used in various parts of MHFramework (particularly for shared data and object caching) are likely to survive into this engine as well, with an attempted compromise between high performance and loose coupling.  Easier said than done, I know.

TL;DR:  A layered architectural structure will directly support at least two of the three driving qualities for my new engine.