Universal Design Principles (karthik)

Universal Design: Process, Principles, and Applications


Universal design (UD) is both a goal and a process that can be applied to the design of any product or environment.

When UD principles are applied, products and environments meet the needs of potential users with a wide variety of characteristics. Disability is just one of many characteristics that an individual might possess. For example, one person could be Hispanic, six feet tall, male, thirty years old, an excellent reader, primarily a visual learner, and deaf. All of these characteristics, including his deafness, should be considered when developing a product or environment he, as well as individuals with many other characteristics, might use.

UD can be applied to any product or environment. For example, a typical service counter in a place of business is not accessible to everyone, including people of short stature, people who use wheelchairs, and people who cannot stand for extended periods of time. Applying UD principles might result in a counter with multiple heights: a standard-height section for individuals in the average height range who use the counter while standing, and a lower section for those who are shorter than average, use a wheelchair for mobility, or prefer to interact with service staff from a seated position.

Making a product or an environment accessible to people with disabilities often benefits others. For example, automatic door openers benefit individuals using walkers and wheelchairs, but also benefit people carrying groceries and holding babies, as well as elderly citizens. Sidewalk curb cuts, designed to make sidewalks and streets accessible to those using wheelchairs, are often used by kids on skateboards, parents with baby strollers, and delivery staff with carts. When television displays in airports and restaurants are captioned, programming is accessible not only to people who are deaf but also to others who cannot hear the audio in noisy areas.

UD is a goal that puts a high value on both diversity and inclusiveness. It is also a process. The following paragraphs summarize process, principles, and applications of UD.

The Process of Universal Design

The process of UD requires a macro view of the application being considered as well as a micro view of subparts of the application. UD can be applied to a variety of applications. The following list suggests a process that can be used to apply UD:

  1. Identify the application. Specify the product or environment to which you wish to apply universal design.
  2. Define the universe. Describe the overall population (e.g., users of service), and then describe the diverse characteristics of potential members of the population for which the application is designed (e.g., students, faculty, and staff with diverse characteristics with respect to gender; age; size; ethnicity and race; native language; learning style; and abilities to see, hear, manipulate objects, read, and communicate).
  3. Involve consumers. Consider and involve people with diverse characteristics (as identified in Step 2) in all phases of the development, implementation, and evaluation of the application. Also gain perspectives through diversity programs, such as the campus disability services office.
  4. Adopt guidelines or standards. Create or select existing universal design guidelines/standards. Integrate them with other best practices within the field of the specific application.
  5. Apply guidelines or standards. Apply universal design in concert with best practices within the field, as identified in Step 4, to the overall design of the application, all subcomponents of the application, and all ongoing operations (e.g., procurement processes, staff training) to maximize the benefit of the application to individuals with the wide variety of characteristics identified in Step 2.
  6. Plan for accommodations. Develop processes to address accommodation requests (e.g., purchase of assistive technology, arrangement for sign language interpreters) from individuals for whom the design of the application does not automatically provide access.
  7. Train and support. Tailor and deliver ongoing training and support to stakeholders (e.g., instructors, computer support staff, procurement officers, volunteers). Share institutional goals with respect to diversity and inclusion and practices for ensuring welcoming, accessible, and inclusive experiences for everyone.
  8. Evaluate. Include universal design measures in periodic evaluations of the application, evaluate the application with a diverse group of users, and make modifications based on feedback. Provide ways to collect input from users (e.g., through online and printed instruments and communications with staff).

Universal Design Principles

At the Center for Universal Design (CUD) at North Carolina State University, a group of architects, product designers, engineers, and environmental design researchers established seven principles of UD to provide guidance in the design of products and environments. Following are the CUD principles of UD, each followed with an example of its application:

  1. Equitable use. The design is useful and marketable to people with diverse abilities. For example, a website that is designed to be accessible to everyone, including people who are blind, employs this principle.
  2. Flexibility in Use. The design accommodates a wide range of individual preferences and abilities. An example is a museum that allows visitors to choose to read or listen to the description of the contents of a display case.
  3. Simple and intuitive. Use of the design is easy to understand, regardless of the user’s experience, knowledge, language skills, or current concentration level. Science lab equipment with clear and intuitive control buttons is an example of an application of this principle.
  4. Perceptible information. The design communicates necessary information effectively to the user, regardless of ambient conditions or the user’s sensory abilities. An example of this principle is captioned television programming projected in noisy restaurants.
  5. Tolerance for error. The design minimizes hazards and the adverse consequences of accidental or unintended actions. An example of a product applying this principle is software applications that provide guidance when the user makes an inappropriate selection.
  6. Low physical effort. The design can be used efficiently, comfortably, and with a minimum of fatigue. Doors that open automatically for people with a wide variety of physical characteristics demonstrate the application of this principle.
  7. Size and space for approach and use. Appropriate size and space is provided for approach, reach, manipulation, and use regardless of the user’s body size, posture, or mobility. A flexible work area designed for use by employees with a variety of physical characteristics and abilities is an example of applying this principle.

Applications of Universal Design

UD can be applied to any product or environment, such as curriculum, instruction, career services offices, multimedia, tutoring and learning centers, conference exhibits, museums, microwave ovens, computer labs, worksites, and web pages. DO-IT (Disabilities, Opportunities, Internetworking, and Technology) produces publications and video presentations that promote UD in a variety of environments.

It is a great honour to be invited to write about the Majority World in Universal Design Newsletter. The intention of both the publisher and the contributor is to broaden design practitioners’ awareness of and response to the needs and requirements of Majority World citizens, particularly regarding the question of how UD Principles can be applied to world citizens who live in poverty.

While the Intermediate or Appropriate Technology movement (with its emphasis on small-scale, locally controlled, labor-intensive, energy-efficient, and environmentally sound design) has been influential in the Majority World, Universal Design has had very little impact in this context. It is my hope that initiating an ongoing dialogue on UniversalDesign.com can help us find out why this has been the case. After all, if Universal Design means anything, it has to have some degree of universal applicability.

Effects of Distributed Development on Software Quality

A short note on the paper "Does Distributed Development Affect Software Quality? An Empirical Case Study of Windows Vista" follows.

In our study we divided binaries based on the level of geographic dispersion of their commits. We studied the post-release failures for the Windows Vista code base and concluded that distributed development has little to no effect. We posit that this negative result is a significant finding, as it refutes, at least in the context of Vista development, conventional wisdom and widely held beliefs about distributed development. When coupled with prior work, our results support the conclusion that there are scenarios in which distributed development can work for large software projects.

Building on earlier work, our study shows that organizational differences are much stronger indicators of quality than geography. An organizationally compact but geographically distributed project might be better than a geographically close but organizationally distributed one. We have presented a number of observations about the development practices at Microsoft which may mitigate some of the hurdles associated with distributed development, but no causal link has been established. There is a strong similarity between these practices and those that have worked for other teams in the past, as well as solutions proposed in other work. Directly examining the effects of these practices is an important direction for continued research in globally distributed software development.
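
The study groups binaries by how geographically dispersed their commits are. The following is a minimal sketch of how such a grouping might be computed; the commit records, site names, and dispersion categories are hypothetical illustrations, not the methodology used in the Vista study.

```python
# Minimal sketch: grouping binaries by the geographic dispersion of their commits.
# Commit records and dispersion categories are invented for illustration only.
from collections import defaultdict

# Hypothetical commit log: (binary, committer_location)
commits = [
    ("kernel32.dll", "Redmond"), ("kernel32.dll", "Redmond"),
    ("shell32.dll", "Redmond"), ("shell32.dll", "Hyderabad"),
    ("ntdll.dll", "Redmond"), ("ntdll.dll", "Copenhagen"), ("ntdll.dll", "Hyderabad"),
]

locations_per_binary = defaultdict(set)
for binary, location in commits:
    locations_per_binary[binary].add(location)

def dispersion_level(locations):
    """Classify a binary by how many distinct sites contributed commits."""
    if len(locations) == 1:
        return "collocated"
    elif len(locations) == 2:
        return "distributed (2 sites)"
    return "distributed (3+ sites)"

for binary, locations in locations_per_binary.items():
    print(binary, dispersion_level(locations))
```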

 

In this paper, we have presented a case study on the communication of distributed projects. The study shows that there is no significant difference in the amount of communication between two-location projects and three-location projects. The results show a trend that the amount of communication in two-location projects is higher than in three-location projects. The study also analyzes the effect of time zone differences by classifying the projects into three time ranges: large, medium, and small. The results likewise show no significant difference in the communication of projects across these time zone ranges; however, the data shows a trend towards more communication in the small time zone range. We also analyzed the reply time for e-mails in projects located in different time zones. We found that in projects located in the small time zone range, the reply time for e-mails was faster than in projects located in the large time zone range. Our results indicate a trend in the effect on communication; however, the analysis reveals that the differences are not significant.

As future work, we plan to collect more data in the next editions of DOSE in order to have more reliable data. We also plan to extend the study and compare the distributed projects with projects developed in a single location. Additionally, we want to study the quality of the produced software and compare the number of failures in local projects to two-location and three-location projects. For future studies, we will keep the classification by time zone ranges.

Acknowledgments: We would like to thank all the people involved in DOSE: Do Le Minh, Franco Brusatti, Giordano Tamburrelli, Huynh Quyet Thang, Lajos Kollar, Mei Tang, Natalia Komlevaja, Nazareno Aguirre, Peter Kolb, Raffaela Mirandola, Sergey Karpenko, Sungwon Kang, Victor Krisilov, Viktor Gergel; and the students who took the course.
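
To make the kind of comparison described above concrete, the sketch below tests whether the number of e-mails differs significantly between two-location and three-location projects. The message counts are invented, and the choice of a Mann-Whitney U test is an assumption for illustration, not the test used in the study.

```python
# Minimal sketch: compare communication volume between two- and three-location
# projects. The per-project e-mail counts are hypothetical.
from scipy.stats import mannwhitneyu

two_location_emails = [120, 95, 140, 110, 130]    # hypothetical counts
three_location_emails = [90, 85, 100, 70, 95]     # hypothetical counts

stat, p_value = mannwhitneyu(two_location_emails, three_location_emails,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Difference is statistically significant")
else:
    print("Trend only; difference is not statistically significant")
```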

INFORMAL COLLABORATION TOOLS FOR GSD (BILAL AHMED)

Informal collaboration tools for global software development teams (BILAL AHMED)

 

Software development teams that include engineers around the globe are adopting informal collaboration tools to overcome communication and cultural barriers and build trust and comfort among members.

A global team differs from a co-located team. Co-located teams will instinctively initiate a process of informal collaboration. This is natural human behavior that gradually leads to increasing trust and mutual understanding. When team members have established a trust relationship, they will ask each other questions and proactively provide information. They will also estimate each other’s abilities better, which leads to more effective division of work.

Figure 1. The roles of various tools in the collaboration process

Informal collaboration tools within collaboration platforms follow the models of these public services. The tech-savvy user group in a software development team is already familiar with such systems and knows how to get the best from them. This makes it easier to introduce similar tools without a high degree of user resistance.

Classifications of tools

Figure 8. Classification of relevant tools and information

Combining tools

A global development team can best use a combination of software engineering tools such as IBM Rational Team Concert and generic informal collaboration tools such as IBM® Lotus Live®.

In some cases the tools can be seamlessly combined, such as the integration between IBM Rational Team Concert and IBM Lotus Sametime for presence and chat.

Many studies have concluded that there are significant barriers to communication in global teams. In addition to cultural and language barriers, there are three other main areas that are problematic:

  • awareness
  • medium
  • synchronicity

Each of these areas can be addressed by using various informal collaboration tools. The tools are of particular value to global software development teams because they can lower the barriers that such teams encounter. A small sketch after the lists below shows this mapping in code.

  • Awareness: Geographical separation leads to lack of awareness of other team members’ skills and expertise or even about their existence in a large project.

    The need for awareness can be addressed by using the following tools:

    • Blogs
    • Social bookmarking
    • Collaborative virtual environments
    • Instant messaging
    • Podcasts
    • Profiles of team members
    • Presence
    • Threaded discussions online
    • VoIP
    • Wikis
  • Medium: It is not possible to communicate face-to-face or to use visual communication (for example, writing on whiteboards).

    The problem of communication medium can be addressed using these tools:

    • Remote desktop viewers
    • Telephone calls
    • Shared whiteboards
    • Videoconferences
    • Collaborative virtual environments
  • Synchronicity: Time zone differences and different patterns of working (national holidays, for instance) lead to limited times when team members can communicate synchronously.

    The problem of synchronicity can be addressed using the following tools:

    • Blogs
    • Social bookmarking
    • E-mail
    • Collaborative virtual environments
    • Newsfeeds (RSS)
    • Podcasts
    • Threaded discussions online
    • Wikis
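
As a minimal sketch, the classification above can be encoded as a simple lookup table so a team could query which informal tools address a given barrier. The structure simply restates the lists above; the function name is an illustrative assumption.

```python
# Minimal sketch: the barrier-to-tools classification above as a lookup table.
TOOLS_BY_BARRIER = {
    "awareness": ["blogs", "social bookmarking", "collaborative virtual environments",
                  "instant messaging", "podcasts", "member profiles", "presence",
                  "threaded discussions", "VoIP", "wikis"],
    "medium": ["remote desktop viewers", "telephone calls", "shared whiteboards",
               "videoconferences", "collaborative virtual environments"],
    "synchronicity": ["blogs", "social bookmarking", "e-mail",
                      "collaborative virtual environments", "newsfeeds (RSS)",
                      "podcasts", "threaded discussions", "wikis"],
}

def tools_for(barrier: str) -> list[str]:
    """Return the informal collaboration tools that help with a given barrier."""
    return TOOLS_BY_BARRIER.get(barrier.lower(), [])

print(tools_for("synchronicity"))
```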

 

SOFTWARE METRICS AND MEASUREMENT (BILAL AHMED)

Software metrics and measurements (BILAL AHMED)

Measurements and Metrics

A measurement is an indication of the size, quantity, amount, or dimension of a particular attribute of a product or process. For example, the number of errors in a system is a measurement.

A metric is a measure of the degree to which a system, product, or process possesses a given attribute. For example, the number of errors per person-hour would be a metric.

Thus, software measurement gives rise to software metrics.
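
A small sketch of this distinction: raw measurements (errors, person-hours, lines of code) are combined into metrics such as errors per person-hour or defects per KLOC. All numbers below are illustrative only.

```python
# Measurements are raw counts; metrics relate them to one another.
errors_found = 42          # measurement: count of errors in the system
person_hours = 350.0       # measurement: effort spent
lines_of_code = 12_000     # measurement: size of the system

errors_per_person_hour = errors_found / person_hours          # metric
defects_per_kloc = errors_found / (lines_of_code / 1000)      # metric

print(f"Errors per person-hour: {errors_per_person_hour:.3f}")
print(f"Defects per KLOC:       {defects_per_kloc:.2f}")
```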

Metrics are related to the four functions of management:

  • Planning
  • Organizing
  • Controlling
  • Improving

Metric Classification

Software metrics can be divided into two categories: product metrics and process metrics.

Product metrics are used to assess the state of the product, track risks, and discover potential problem areas. They also help assess the team's ability to control quality.

Process metrics focus on improving the long term process of the team or organisation.

Goal Question Metric (GQM)

The GQM approach, developed by Victor Basili, defines a top-down, goal-oriented framework for software metrics. It approaches software measurement using a three-level model: conceptual, operational, and quantitative. At the conceptual level, goals are set prior to metrics collection. According to GQM, organizational goals shape project goals. Organizational goals may be set by upper management or by organization stakeholders, while project goals may be established through brainstorming sessions with the project team. At the operational level, one or more questions are established for each goal which, when answered, indicate whether the goal has been achieved. Finally, at the quantitative level, each question has a set of data associated with it that allows it to be answered in a quantitative way.

GQM is described as a seven-step process. (Some authors give a different number of steps, leaving out step 7 or merging two steps into one.) The first three steps are crucial and correspond to the main activities at each level in the model as described above.
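
The hierarchy behind the model can be sketched as a small data structure: a goal at the conceptual level, questions at the operational level, and metrics at the quantitative level. The example goal, questions, and metric names below are hypothetical.

```python
# Minimal sketch of the GQM hierarchy (goal -> questions -> metrics).
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str            # quantitative level: data that answers a question

@dataclass
class Question:
    text: str            # operational level: indicates whether the goal is met
    metrics: list = field(default_factory=list)

@dataclass
class Goal:
    description: str     # conceptual level: set before metrics are collected
    questions: list = field(default_factory=list)

goal = Goal("Improve the reliability of release builds",
            questions=[
                Question("How many defects are found after release?",
                         metrics=[Metric("post-release defect count"),
                                  Metric("defects per KLOC")]),
                Question("How quickly are reported defects fixed?",
                         metrics=[Metric("mean time to repair")]),
            ])

for q in goal.questions:
    print(q.text, "->", [m.name for m in q.metrics])
```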

GQM Steps

Step 1. Develop a set of Goals

Develop goals at the corporate, division, or project level. These goals can be established through brainstorming sessions involving project team members, or they may be derived from organisational goals or stakeholders' requirements.

Step 2. Develop a set of questions that characterise the goals.

From each goal a set of questions is derived which will determine if each goal is being met.

Step 3. Specify the Metrics needed to answer the questions.

For each question from Step 2, it is determined what needs to be measured to answer that question adequately.

Step 4. Develop Mechanisms for Data Collection and Analysis

It must be determined:

  • Who will collect the data?
  • When will the data be collected?
  • How can accuracy and efficiency be ensured?
  • Who will be the audience?

Step 5. Collect, Validate, and Analyse the Data

The data may be collected manually or automatically. Metrics data can be portrayed graphically to enhance understanding.

Step 6. Analyse in a Post Mortem Fashion

The data gathered is analysed and examined to determine its conformity to the goals. Based on the findings, recommendations are made for future improvements.

Step 7. Provide Feedback to Stakeholders

The last step, providing feedback to the stakeholders, is a crucial part of the measurement process. It is essentially the purpose behind the previous six steps. Feedback is often presented in the form of one goal per page with the questions, metrics, and graphed results.

Metrics in Project Estimation

Software metrics are a way of putting a value or measure on certain aspects of development, allowing a project to be compared to other projects. These values have to be assessed correctly; otherwise they will not give accurate measurements and can lead to false estimations.

Metrics are used to maintain control over the software development process. They allow managers to manage resources more efficiently so that the overall team is more productive. Examples include size projections such as source byte size, source lines of code, function points, and GUI features, as well as productivity projections such as productivity metrics.

Metrics can be used to measure the size, cost, effort, and product quality of a project, as well as to put a value on the quality of the process followed and on personal productivity. Certain factors have to be taken into account when comparing different projects using metrics. If one project was written in a different language, the number of lines of code could be significantly different, or perhaps the larger project has many more errors and bugs in it. Measurements such as function points give a better indication because they count the functionality actually delivered by the project rather than the number of lines.

Using metrics companies can make much better estimations on the cost, length, effort, etc of a project which leads to them giving a more accurate view of the overall project. This better view will allow the company to bid for projects more successfully, make projects more likely to succeed and will greatly reduce the risk of late delivery, failure, being penalized for late delivery, bankruptcy, etc.

The processes that manage the project and its code can have issues, such as build failures and needed patches, that can affect the metrics' measures. Using ISO 9000 can help alleviate these issues.

For smaller companies where customer retention is important, using metrics to improve the software development process, overall project quality, and delivery time will keep customers happy, which may lead to continued business.

 

Size Projections

Source Byte Size

Source byte size is a measure of the actual size of the source code data (e.g., in kilobytes). It measures raw file size rather than counting packages, classes, or methods.

The overall byte size of the project can be estimated, which gives a better indication of the type and size of hardware needed to run the software. This becomes an issue on systems and devices with limited resources (e.g., a watch, washing machine, or intelligent fridge).

The byte size of the source code would vary greatly depending on which programming language was used to develop it. For example a program developed in Java could be much smaller than the same program coded in COBOL.
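
The sketch below shows one simple way such size measurements could be gathered: total source byte size and a naive source-lines-of-code count over a project directory. The directory path, file extension, and SLOC rule (non-blank lines) are assumptions for illustration.

```python
# Minimal sketch: measure total source byte size and a naive SLOC count.
import os

def source_size(root: str, extension: str = ".py"):
    total_bytes = 0
    total_lines = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith(extension):
                path = os.path.join(dirpath, name)
                total_bytes += os.path.getsize(path)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    # Naive SLOC: non-blank lines; real tools also skip comments.
                    total_lines += sum(1 for line in f if line.strip())
    return total_bytes, total_lines

size_bytes, sloc = source_size("./src")   # "./src" is a placeholder path
print(f"Source byte size: {size_bytes / 1024:.1f} KB, SLOC: {sloc}")
```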

Productivity Projections

Software Engineering Productivity      

Projecting productivity is all about measurements. In general these measurements involve the rate at which a software engineer produces software and the accompanying documentation. While quality is also an important aspect of the produced software, the measurement is not quality oriented. In essence, the goal is to measure the useful functionality produced in a unit of time.

Productivity metrics

Metrics on productivity come in two main categories.

  1. Size-related measurements based on some output from the software process, e.g., lines of delivered source code or object code instructions.
  2. Function-related measurements based on the functionality of the deliverables. This is expressed in so-called "function points".

Function points

Function points are determined on the basis of a combination of program characteristics, such as external inputs and outputs, user interactions, external interfaces, and files used by the system, with a weight associated with each of these characteristics. Finally, a "function point count" is computed by multiplying each raw count by its weight and summing all values.
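
The calculation can be sketched directly from that description: multiply each raw count by its weight and sum. The weights below are the conventional "average complexity" values often quoted for function point counting, and the raw counts are invented for illustration.

```python
# Minimal sketch of an unadjusted function point count.
WEIGHTS = {                      # assumed "average complexity" weights
    "external_inputs": 4,
    "external_outputs": 5,
    "user_inquiries": 4,
    "external_interfaces": 7,
    "internal_files": 10,
}

raw_counts = {                   # hypothetical counts for a system
    "external_inputs": 12,
    "external_outputs": 8,
    "user_inquiries": 5,
    "external_interfaces": 2,
    "internal_files": 6,
}

function_points = sum(raw_counts[k] * WEIGHTS[k] for k in WEIGHTS)
print("Unadjusted function point count:", function_points)
```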

Challenges in Software Metrics

The biggest challenge in measurement lies mainly in the estimation process: estimating the size of the software and the total elapsed time assigned to programming. In addition, it can be difficult to define elements such as what counts as "a line of code" or which programs form part of the system.

If we were to compare the executable byte size of programs written in different languages, we would also see a difference. The compiler translates source code into object or byte code, so different compilers produce different executable file sizes.

As more development is done and the project grows, the overall byte size of the project increases. This gives estimates of how much space is needed to store, back up, and transfer the source files over the network. Although this is not much of a problem these days, with storage and transfer bandwidth being so cheap, it is a metric that can be used for this type of storage estimation.

As the byte size builds up, searching and indexing take slightly longer with every increase.

 

REVERSE ENGINEERING TOOLS AND CONCEPTS (BILAL AHMED)

Reverse engineering tools and concepts (BILAL AHMED)

Reverse engineering is the process of discovering the technological principles of a device, object, or system through analysis of its structure, function, and operation. It often involves taking something (a mechanical device, electronic component, computer program, or biological, chemical, or organic matter) apart and analyzing its workings in detail to be used in maintenance, or to try to make a new device or program that does the same thing without using or simply duplicating (without understanding) the original.

Reverse engineering has its origins in the analysis of hardware for commercial or military advantage. The purpose is to deduce design decisions from end products with little or no additional knowledge about the procedures involved in the original production. The same techniques are now being researched for application to legacy software systems, not for industrial or defense ends, but rather to replace incorrect, incomplete, or otherwise unavailable documentation.

Reverse Engineering Tools and Concepts

Reverse engineering fuels entire technical industries and paves the way for competition. Reverse engineers work on hard problems like integrating software with proprietary protocols and code. They also are often tasked with unraveling the mysteries of new products released by competitors. The boom in the 1980s of the PC clone market was heavily driven by the ability to reverse engineer the IBM PC BIOS software. The same tricks have been applied in the set-top game console industry (which includes the Sony PlayStation, for example). Chip manufacturers Cyrix and AMD have reverse engineered the Intel microprocessor to release compatible chips. From a legal perspective, reverse engineering work is dangerous because it skirts the edges of the law. New laws such as the DMCA and UCITA (which many security analysts decry as egregious) put heavy restrictions on reverse engineering. If you are tasked with reverse engineering software legally, you need to understand these laws. We are not going to dwell on the legal aspects of reverse engineering because we are not legal experts. Suffice it to say that it is very important to seek legal counsel on these matters, especially if you represent a company that cares about its intellectual property.

The Debugger

A debugger is a software program that attaches to and controls other software programs. A debugger allows single stepping of code, debug tracing, setting breakpoints, and viewing variables and memory state in the target program as it executes in a stepwise fashion. Debuggers are invaluable in determining logical program flow. Debuggers fall into two categories: user-mode and kernel-mode debuggers. User-mode debuggers run like normal programs under the OS and are subject to the same rules as normal programs. Thus, user-mode debuggers can only debug other user-level processes. A kernel-mode debugger is part of the OS and can debug device drivers and even the OS itself. One of the most popular commercial kernel-mode debuggers is called SoftIce, and it is published by Compuware.
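
As a minimal illustration of user-mode debugging, Python's built-in pdb debugger can attach at a breakpoint and allow single stepping and inspection of variables. This is only an analogy to the native debuggers described above (such as SoftIce), which operate on compiled code rather than an interpreter.

```python
# Minimal sketch: pause a program at a breakpoint with Python's pdb debugger.
import pdb

def buggy_sum(values):
    total = 0
    for v in values:
        pdb.set_trace()   # execution pauses here; step with 'n', inspect with 'p v'
        total += v
    return total

if __name__ == "__main__":
    print(buggy_sum([1, 2, 3]))
```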

Fault Injection Tools

Tools that can supply malformed or improperly formatted input to a target software process to cause failures are one class of fault injection tool. Program failures can be analyzed to determine whether errors exist in the target software. Some failures have security implications, such as failures that allow an attacker direct access to the host computer or network. Fault injection tools fall into two categories: host and network. Host-based fault injectors operate like debuggers and can attach to a process and alter program states. Network-based fault injectors manipulate network traffic to determine the effect on the receiver.

Although classic approaches to fault injection often make use of source code instrumentation [Voas and McGraw, 1999], some modern fault injectors pay more attention to tweaking program input. Of particular interest to security practitioners are Hailstorm (Cenzic), the Failure Simulation Tool or FST (Cigital), and Holodeck (Florida Tech). James Whittaker’s approach to fault injection for testing (and breaking) software is explained in two books [Whittaker, 2002; Whittaker and Thompson, 2003].
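
In the spirit of the input-tweaking fault injectors described above, the sketch below feeds randomly malformed inputs to a target function and records any failures. The target parser is a hypothetical stand-in, not one of the tools named in the text.

```python
# Minimal sketch of input-based fault injection (fuzzing) against a target function.
import random
import string

def target_parser(data: str) -> int:
    """Hypothetical target: parses 'key=value' and returns the integer value."""
    key, value = data.split("=")
    return int(value)

def random_input(max_len: int = 20) -> str:
    alphabet = string.printable
    return "".join(random.choice(alphabet) for _ in range(random.randint(0, max_len)))

failures = []
for _ in range(1000):
    fuzzed = random_input()
    try:
        target_parser(fuzzed)
    except Exception as exc:          # an unhandled error is a finding worth analyzing
        failures.append((fuzzed, type(exc).__name__))

print(f"{len(failures)} failing inputs out of 1000")
```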

The Disassembler

A disassembler is a tool that converts machine-readable code into assembly language. Assembly language is a human-readable form of machine code (well, more human readable than a string of bits anyway). Disassemblers reveal which machine instructions are being used in the code. Machine code is usually specific to a given hardware architecture (such as the PowerPC chip or Intel Pentium chip). Thus, disassemblers are written expressly for the target hardware architecture.
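
A small analogy: Python's dis module turns compiled Python bytecode into a readable instruction listing, which mirrors what a native disassembler does for machine code of a specific hardware architecture. This is offered only as an illustration of the concept, not as a tool for reversing native binaries.

```python
# Minimal sketch: disassemble Python bytecode with the standard-library dis module.
import dis

def average(values):
    return sum(values) / len(values)

dis.dis(average)   # prints the bytecode instructions that implement average()
```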

The Reverse Compiler or Decompiler

A decompiler is a tool that converts assembly code or machine code into source code in a higher-level language such as C. Decompilers also exist to transform intermediate languages such as Java byte code and Microsoft Common Language Runtime (CLR) code into source code such as Java. These tools are extremely helpful in determining higher-level logic such as loops, switches, and if-then statements. Decompilers are much like disassemblers but take the process one (important) step further. A good disassembler/compiler pair can be used to compile its own collective output back into the same binary.

 

COMPONENT-BASED ENGINEERING (BILAL AHMED)

Component-based Engineering (BILAL AHMED)

Component-based software engineering (CBSE) (also known as component-based development (CBD)) is a branch of software engineering that emphasizes the separation of concerns in respect of the wide-ranging functionality available throughout a given software system. It is a reuse-based approach to defining, implementing and composing loosely coupled independent components into systems. This practice aims to bring about an equally wide-ranging degree of benefits in both the short-term and the long-term for the software itself and for organizations that sponsor such software.

Software engineers regard components as part of the starting platform for service-orientation. Components play this role, for example, in Web services, and more recently, in service-oriented architectures (SOA), whereby a component is converted by the Web service into a service and subsequently inherits further characteristics beyond that of an ordinary component.

Components can produce or consume events and can be used for event driven architectures (EDA).

Definition and characteristics of components

An individual software component is a software package, a Web service, or a module that encapsulates a set of related functions (or data).

All system processes are placed into separate components so that all of the data and functions inside each component are semantically related (just as with the contents of classes). Because of this principle, it is often said that components are modular and cohesive.

With regard to system-wide co-ordination, components communicate with each other via interfaces. When a component offers services to the rest of the system, it adopts a provided interface that specifies the services that other components can utilize, and how they can do so. This interface can be seen as a signature of the component – the client does not need to know about the inner workings of the component (implementation) in order to make use of it. This principle results in components referred to as encapsulated. The UML illustrations within this article represent provided interfaces by a lollipop-symbol attached to the outer edge of the component.

However, when a component needs to use another component in order to function, it adopts a used interface that specifies the services that it needs. In the UML illustrations in this article, used interfaces are represented by an open socket symbol attached to the outer edge of the component.

 

 

A simple example of several software components – pictured within a hypothetical holiday-reservation system represented in UML 2.0.

Another important attribute of components is that they are substitutable, so that a component can replace another (at design time or run-time), if the successor component meets the requirements of the initial component (expressed via the interfaces). Consequently, components can be replaced with either an updated version or an alternative without breaking the system in which the component operates.

As a general rule of thumb for engineers substituting components, component B can immediately replace component A, if component B provides at least what component A provided and uses no more than what component A used.
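
The substitutability rule can be sketched with two components that expose the same provided interface; the client depends only on the interface, so one component can replace the other. The reservation example and class names below are hypothetical, loosely echoing the holiday-reservation illustration mentioned earlier.

```python
# Minimal sketch: provided interfaces and component substitutability.
from typing import Protocol

class ReservationProvider(Protocol):      # the "provided interface"
    def reserve(self, customer: str, nights: int) -> str: ...

class HotelComponent:
    def reserve(self, customer: str, nights: int) -> str:
        return f"Hotel booked for {customer}, {nights} nights"

class HostelComponent:                    # substitutable: same provided interface
    def reserve(self, customer: str, nights: int) -> str:
        return f"Hostel bunk booked for {customer}, {nights} nights"

def book_holiday(provider: ReservationProvider) -> None:
    # The client uses only the interface, never the component's internals.
    print(provider.reserve("Alice", 3))

book_holiday(HotelComponent())
book_holiday(HostelComponent())   # component B replaces component A transparently
```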

Software components often take the form of objects (not classes) or collections of objects (from object-oriented programming), in some binary or textual form, adhering to some interface description language (IDL) so that the component may exist autonomously from other components in a computer.

When a component is to be accessed or shared across execution contexts or network links, techniques such as serialization or marshalling are often employed to deliver the component to its destination.
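
A minimal sketch of that idea, using JSON purely for illustration: the component's state is serialized before it crosses a network link or execution context and reconstructed at the destination. The field names are invented.

```python
# Minimal sketch: marshalling a component's state across a boundary with JSON.
import json

booking_state = {"component": "HotelComponent", "customer": "Alice", "nights": 3}

wire_format = json.dumps(booking_state)        # serialize before sending
received = json.loads(wire_format)             # reconstruct at the destination
print(received["component"], received["nights"])
```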

Reusability is an important characteristic of a high-quality software component. Programmers should design and implement software components in such a way that many different programs can reuse them. Furthermore, component-based usability testing should be considered when software components directly interact with users.

It takes significant effort and awareness to write a software component that is effectively reusable. The component needs to be:

  • fully documented
  • thoroughly tested
    • robust – with comprehensive input-validity checking
    • able to pass back appropriate error messages or return codes
  • designed with an awareness that it will be put to unforeseen uses

In the 1960s, programmers built scientific subroutine libraries that were reusable in a broad array of engineering and scientific applications. Though these subroutine libraries reused well-defined algorithms in an effective manner, they had a limited domain of application. Commercial sites routinely created application programs from reusable modules written in Assembler, COBOL, and other second- and third-generation languages, using both system and user application libraries.

As of 2010, modern reusable components encapsulate both data structures and the algorithms that are applied to the data structures. Component-based software engineering builds on prior theories of software objects, software architectures, software frameworks, and software design patterns, and on the extensive theory of object-oriented programming and object-oriented design. It claims that software components, like hardware components (used, for example, in telecommunications), can ultimately be made interchangeable and reliable. On the other hand, it is argued that it is a mistake to focus on independent components rather than the framework (without which they would not exist).

Differences from object-oriented programming

Proponents of object-oriented programming (OOP) maintain that software should be written according to a mental model of the actual or imagined objects it represents. OOP and the related disciplines of object-oriented analysis and object-oriented design focus on modeling real-world interactions and attempting to create “nouns” and “verbs” that can be used in more human-readable ways, ideally by end users as well as by programmers coding for those end users.

Component-based software engineering, by contrast, makes no such assumptions, and instead states that developers should construct software by gluing together prefabricated components, much like in the fields of electronics or mechanics. Some will even talk of modularizing systems as software components as a new programming paradigm.

Some argue that earlier computer scientists made this distinction, with Donald Knuth’s theory of “literate programming” optimistically assuming there was convergence between intuitive and formal models, and Edsger Dijkstra’s theory in the article “On the Cruelty of Really Teaching Computing Science”, which stated that programming was simply, and only, a branch of mathematics.

In both forms, this notion has led to many academic debates about the pros and cons of the two approaches and possible strategies for uniting the two. Some consider the different strategies not as competitors, but as descriptions of the same problem from different points of view.

 Architecture

A computer running several software components is often called an application server. Using this combination of application servers and software components is usually called distributed computing. The usual real-world application of this is in e.g. financial applications or business software.

 Models

A component model is a definition of standards for component implementation, documentation, and deployment. Examples of component models are the EJB model (Enterprise JavaBeans), the COM+ model (.NET model), and the CORBA Component Model. The component model specifies how interfaces should be defined and the elements that should be included in an interface definition.

 Technologies

  • Business object technologies
  • Component-based software frameworks for specific domains
    • Earth System Modeling Framework
  • Component-oriented programming
    • Bundles as defined by the OSGi Service Platform

 

 

“Universal Design” applicable to software – (Manjunath M)

What is Universal Design?

Universal design makes things more accessible, safer, and convenient for everyone. Also called “Design for All” or “Inclusive Design,” it is a philosophy that can be applied to policy, design and other practices to make products, environments and systems function better for a wider range of people. It developed in response to the diversity of human populations, their abilities and their needs. Examples of universal design include utensils with larger handles, curb ramps, automated doors, kneeling buses with telescoping ramps, houses with no-step entries, closed captioning in televisions, and the accessibility features incorporated into computer operating systems and software.

Universal design refers to broad-spectrum ideas meant to produce buildings, products, and environments that are inherently accessible to both people without disabilities and people with disabilities.

The term “universal design” was coined by the architect Ronald L. Mace to describe the concept of designing all products and the built environment to be aesthetic and usable to the greatest extent possible by everyone, regardless of their age, ability, or status in life. However, it was the work of Selwyn Goldsmith, author of Designing for the Disabled (1963), that really pioneered the concept of free access for disabled people. His most significant achievement was the creation of the dropped curb, now a standard feature of the built environment.

Universal design emerged from slightly earlier barrier-free concepts (barrier-free building modification consists of modifying buildings or facilities so that they can be used by people who are disabled or have physical impairments), the broader accessibility movement, and adaptive and assistive technology, and also seeks to blend aesthetics into these core considerations. As life expectancy rises and modern medicine increases the survival rate of those with significant injuries, illnesses, and birth defects, there is a growing interest in universal design. There are many industries in which universal design is having strong market penetration, but there are many others in which it has not yet been adopted to any great extent. Universal design is also being applied to the design of technology, instruction, services, and other products and environments.

 

 

Purpose of Universal Design

Universal Design E-World provides web-based tools to support the “community of practice” in universal design. Conventional web pages can be constructed to provide permanent reference documents and links to other resources on the World Wide Web. A wiki engine allows the construction of interactive web pages and collaboration in the development of documents. Online surveys can be organized using an online survey research and polling tool. The activities supported by UD E-World can be public or restricted to specific groups, such as the participants of a research project, a temporary work group, or a special interest group in a professional organization. To avoid abuse, we only provide access to registered users, even in the public areas, and reserve the right to bar any user who does not comply with our Terms of Use.

 

How can the process be applied to the design of any product or environment?

Designing any product or environment involves the consideration of many factors, including aesthetics, engineering options, environmental issues, safety concerns, industry standards, and cost. Often, designers focus on the average user. In contrast, universal design (UD), according to the Center for Universal Design, “is the design of products and environments to be usable by all people, to the greatest extent possible, without the need for adaptation or specialized design”.


the concept of multi-paradigm programming language – (Manjunath M)

Programming languages are often classified according to their paradigms, e.g. imperative, functional, logic, constraint-based, object-oriented, or aspect-oriented. A paradigm characterizes the style, concepts, and methods of the language for describing situations and processes and for solving problems, and each paradigm serves best for programming in particular application areas. Real-world problems, however, are often best implemented by a combination of concepts from different paradigms, because they comprise aspects from several realms, and this combination is more comfortably realized using multiparadigm programming languages.

In object-oriented programming, programmers can think of a program as a collection of interacting objects, while in functional programming a program can be thought of as a sequence of stateless function evaluations. When programming computers or systems with many processors, process-oriented programming allows programmers to think about applications as sets of concurrent processes acting upon logically shared data structures.
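
The contrast can be sketched with the same small task written both ways; the order-totalling example below is invented purely to illustrate the two styles.

```python
# Minimal sketch: the same task as interacting objects and as stateless functions.
from functools import reduce

# Object-oriented style: state lives inside an object and is mutated by methods.
class Order:
    def __init__(self):
        self.items = []

    def add(self, price: float) -> None:
        self.items.append(price)

    def total(self) -> float:
        return sum(self.items)

order = Order()
order.add(9.99)
order.add(4.50)
print(order.total())

# Functional style: no mutable state, just function evaluations over values.
prices = (9.99, 4.50)
print(reduce(lambda acc, p: acc + p, prices, 0.0))
```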

Some languages are designed to support one particular paradigm (Smalltalk supports object-oriented programming, Haskell supports functional programming), while other programming languages support multiple paradigms.

Many programming paradigms are as well known for the techniques they forbid as for those they enable. For instance, pure functional programming disallows the use of side effects, while structured programming disallows the use of the goto statement. Partly for this reason, new paradigms are often regarded as doctrinaire or overly rigid by those accustomed to earlier styles.

Multi-paradigm programming language


A multi-paradigm programming language is a programming language that supports more than one programming paradigm. As Leda designer Timothy Budd puts it: “The idea of a multiparadigm language is to provide a framework in which programmers can work in a variety of styles, freely intermixing constructs from different paradigms.” The design goal of such languages is to allow programmers to use the best tool for a job, admitting that no one paradigm solves all problems in the easiest or most efficient way.

The earliest programming languages all follow the procedural paradigm. That is, they describe, step by step, exactly the procedure that should, according to the particular programmer at least, be followed to solve a specific problem. The efficacy and efficiency of any such solution are both therefore entirely subjective and highly dependent on that programmer’s experience, inventiveness, and ability.

Later, object-oriented languages (like C++, Eiffel, and Java) were created. In these languages, data, and the methods that manipulate the data, are kept as a single unit called an object. The only way that a user can access the data is via the object’s ‘methods’ (subroutines). Because of this, the internal workings of an object may be changed without affecting any code that uses the object. The necessity for every object to have associated methods leads some skeptics to associate OOP with software bloat. Polymorphism was developed as one attempt to resolve this dilemma.

Since object-oriented programming is considered a paradigm, not a language, it is possible to create even an object-oriented assembler language. High Level Assembly (HLA) is an example of this that fully supports advanced data types and object-oriented assembly language programming – despite its early origins.

Within imperative programming, which is based on procedural languages, an alternative to the computer-centered hierarchy of structured programming is literate programming, which structures programs instead as a human-centered web, as in a hypertext essay: documentation is integral to the program, and the program is structured following the logic of prose exposition rather than compiler convenience.

Independent of the imperative branch, declarative programming paradigms were developed. In these languages the computer is told what the problem is, not how to solve it: the program is structured as a collection of properties to find in the expected result, not as a procedure to follow. Given a database or a set of rules, the computer tries to find a solution matching all the desired properties. Archetypical examples of declarative languages are the fourth-generation language SQL, the family of functional languages, and logic programming.

Functional programming is a subset of declarative programming. Programs written using this paradigm use functions, blocks of code intended to behave like mathematical functions. Functional languages discourage changing the value of variables through assignment, making a great deal of use of recursion instead.
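
A small sketch of that last point: an imperative loop that mutates a variable versus a functional version that relies on recursion rather than reassignment. The example is illustrative only.

```python
# Minimal sketch: mutation in a loop versus recursion without reassignment.
def total_imperative(values):
    total = 0
    for v in values:       # state changes on each iteration
        total += v
    return total

def total_functional(values):
    if not values:         # base case
        return 0
    return values[0] + total_functional(values[1:])   # recursion, no mutation

print(total_imperative([1, 2, 3, 4]), total_functional([1, 2, 3, 4]))
```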

The logic programming paradigm views computation as automated reasoning over a corpus of knowledge. Facts about the problem domain are expressed as logic formulae, and programs are executed by applying inference rules over them until an answer to the problem is found, or the collection of formulae is proved inconsistent.
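
The following is a minimal sketch of that view in ordinary Python rather than a real logic language such as Prolog: a set of facts, a single inference rule, and forward chaining until no new facts can be derived. The family-relations domain is an invented example.

```python
# Minimal sketch: facts, an inference rule, and forward chaining.
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def grandparent_rule(facts):
    """If X is a parent of Y and Y is a parent of Z, infer X is a grandparent of Z."""
    derived = set()
    for (p1, x, y1) in facts:
        for (p2, y2, z) in facts:
            if p1 == "parent" and p2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

# Apply the rule repeatedly until the set of facts stops growing.
while True:
    new_facts = grandparent_rule(facts) - facts
    if not new_facts:
        break
    facts |= new_facts

print(sorted(f for f in facts if f[0] == "grandparent"))
```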

the ISO 9001 and ISO 9126 standards covering software quality – (Manjunath M)

The most challenging goal of software engineering is to find better techniques and methods for developing quality, error-resistant software at reasonable cost. In today’s world of information, computers have been applied to a number of large and critical areas of industry.

The quality characteristics of software can be measured with a set of attributes defined for each characteristic. These characteristics help in evaluating the quality of software, but they do not provide guidance for constructing high-quality software products. Quality characteristics are defined in the standard ISO/IEC 9126.

Quality management system requirements are defined in the ISO 9001 standard. The main goal of these requirements is to satisfy customer needs, which is the measure of a quality software product.

In the context of software engineering, software quality refers to two related but distinct notions that exist wherever quality is defined in a business context.

Software functional quality reflects how well the software complies with or conforms to a given design, based on functional requirements or specifications. That attribute can also be described as the fitness for purpose of a piece of software or how it compares to competitors in the marketplace as a worthwhile product.

Software structural quality refers to how the software meets non-functional requirements that support the delivery of the functional requirements, such as robustness or maintainability; in other words, the degree to which the software was produced correctly.

Structural quality is evaluated through the analysis of the software inner structure, its source code, in effect how its architecture adheres to sound principles of software architecture. In contrast, functional quality is typically enforced and measured through software testing.

The structure, classification, and terminology of attributes and metrics applicable to software quality management have been derived or extracted from ISO 9126-3 and the subsequent ISO 25000:2005 quality model, also known as SQuaRE. Based on these models, the Consortium for IT Software Quality (CISQ) has defined five major desirable structural characteristics needed for a piece of software to provide business value: Reliability, Efficiency, Security, Maintainability, and (adequate) Size.

Software quality measurement quantifies to what extent a software or system rates along each of these five dimensions. An aggregated measure of software quality can be computed through a qualitative or a quantitative scoring scheme or a mix of both and then a weighting system reflecting the priorities. This view of software quality being positioned on a linear continuum is supplemented by the analysis of “critical programming errors” that under specific circumstances can lead to catastrophic outages or performance degradations that make a given system unsuitable for use regardless of rating based on aggregated measurements.
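
The aggregated measure described above can be sketched as a weighted average of per-characteristic scores. All scores, weights, and the 0-10 scale below are invented for illustration; a real scoring scheme would be defined by the organization's priorities.

```python
# Minimal sketch: a weighted aggregate quality score over the five CISQ
# characteristics. All numbers are hypothetical.
scores = {"reliability": 8, "efficiency": 6, "security": 9,
          "maintainability": 7, "size": 5}
weights = {"reliability": 0.30, "efficiency": 0.15, "security": 0.25,
           "maintainability": 0.20, "size": 0.10}

overall = sum(scores[c] * weights[c] for c in scores) / sum(weights.values())
print(f"Aggregated quality score: {overall:.2f} / 10")
```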

ISO STANDARDS

ISO is the International Organization for Standardization, which has membership from countries all around the world. It has developed about 19,000 International Standards and publishes about 1,000 new standards every year.

ISO standards published in recent years are in fields of information and societal security, climate change, energy efficiency and renewable resources, sustainable building design and operation, water services, nanotechnologies, intelligent transport systems, food safety management, and health informatics.

SOFTWARE QUALITY STANDARDS

ISO/IEC 9126

ISO/IEC 9126 is one of the most widely used software quality standards in the world. It is intended to specify the required software product quality for software development and software evaluation.

This standard is divided into four parts:

• Quality model

• External metrics

• Internal metrics

• Quality in use metrics

This quality model can be applied in many sectors. It describes the quality model framework that explains the relationships between the different approaches to quality, and it consists of six characteristics, each of which is divided into a set of sub-characteristics:

Functionality – a set of software attributes that provide functions which satisfy the stated or implied needs of the user.

Reliability – a set of software attributes concerning the software’s ability to maintain its level of performance under stated conditions for a stated period of time.

Usability – a set of software attributes that measure the effort needed for a user to learn and use the product.

Efficiency – a set of software attributes that represent the relationship between the level of performance of the software and the amount of resources used under stated conditions.

Maintainability – a set of software attributes that bear on the effort needed to make specified modifications while avoiding unexpected effects. This characteristic describes the ease with which the software product can be changed.

Portability – a set of software attributes concerning the ability of the software to be transferred from one environment to another. This is important when the application is intended to run on different distributed platforms.

Internal metrics are static metrics that do not rely on software execution; they are used to measure the characteristics and sub-characteristics identified in the quality model.

External metrics rely on running software; they are used to measure the characteristics and sub-characteristics identified in the quality model.

Quality in use metrics can be measured only when the final product is used in a real environment under real conditions; they measure the effects of the quality characteristics on the user.

Figure 1: ISO/IEC 9126-1 external and internal quality attributes.

In the past years, ISO 9000 has proven to be a very important and effective tool that cannot be overlooked. According to a study done in Sweden, which focused on the factors for implementing the standard, the benefits gained after implementation, and the motives for implementing it, it was determined that the essential interests in getting certification are to increase corporate reputation and quality.

The idea behind certification

ISO 9001 aims to define a set of requirements that, when properly implemented, give the customer and the supplier confidence that the goods and services supplied:

• Meet the customer's needs and expectations

• Comply with applicable regulations

Copyleft, GNU/Linux, Creative Commons, GPL, DRM (Manjunath M)

Creative Commons

Creative Commons helps you share your knowledge and creativity with the world. Creative Commons develops, supports, and stewards legal and technical infrastructure that maximizes digital creativity, sharing, and innovation.

Creative Commons (CC) is a nonprofit organization headquartered in Mountain View, California, United States, devoted to expanding the range of creative works available for others to build upon legally and to share. The organization has released several copyright licenses, known as Creative Commons licenses, to the public free of charge.

These licenses allow creators to communicate which rights they reserve and which rights they waive for the benefit of recipients or other creators. An easy-to-understand one-page explanation of rights, with associated visual symbols, explains the specifics of each Creative Commons license. Creative Commons licenses do not replace copyright; they are based upon it. They replace the individual negotiations for specific rights between the copyright owner (licensor) and the licensee, which are necessary under an “all rights reserved” copyright management, with a “some rights reserved” management employing standardized licenses for re-use cases where no commercial compensation is sought by the copyright owner. The result is an agile, low-overhead, low-cost copyright management regime that benefits both copyright owners and licensees.

GPL

The GNU General Public License is a free software license (one of many) created to protect the four essential freedoms of software users. Those freedoms are:

Freedom 0. the freedom to use the software for any purpose,

Freedom 1. the freedom to change the software to suit your needs,

Freedom 2. the freedom to share the software with your friends and neighbors, and

Freedom 3. the freedom to share the changes you make.

The license is the brainchild of Richard M. Stallman, the founder of the free software movement and of the Free Software Foundation, the not-for-profit organization that was born from his vision. There are multiple incarnations of the license, each with its own distinct purpose:

GNU General Public License (GPL) – Designed to protect and enforce the aforementioned freedoms. All derivative works must also be licensed under the GPL or a GPL-compatible license.

GNU Lesser General Public License (LGPL) – Designed to protect the four freedoms, but permits usage in proprietary applications to a limited degree. Derivative works must still be licensed under a GPL-compatible license.

GNU Affero General Public License (AGPL) – The most noble and powerful of the GPL licenses. It is an extension of the GNU GPL, with an added clause requiring that users who access the software through a server (e.g., a website or web service) must have access to the software's source code. All derivative works must be licensed under the AGPL or a compatible license.

The GPL is a guardian to those who believe that the user should, above all else, be free to use, alter, and distribute the program as they wish. It is a curse to those who wish to take advantage of their users by creating proprietary software in order to control and exploit them.

Copyleft

Copyleft is a play on the word copyright that describes the practice of using copyright law to offer the right to distribute copies and modified versions of a work while requiring that the same rights be preserved in modified versions of the work. In other words, copyleft is a general method for making a program (or other work) free (libre) and requiring all modified and extended versions of the program to be free as well. Free here does not necessarily mean free of cost (gratis), but free in the sense of being freely available to be modified.

Copyleft is a form of licensing and can be used to maintain copyright conditions for works such as computer software, documents, and art. In general, copyright law is used by an author to prohibit others from reproducing, adapting, or distributing copies of the author's work. In contrast, under copyleft, an author may give every person who receives a copy of a work permission to reproduce, adapt, or distribute it, while requiring that any resulting copies or adaptations are also bound by the same licensing agreement.

Copyleft licenses (for software) require that the information necessary for reproducing and modifying the work be made available to recipients of the executable. The source code files will usually contain a copy of the license terms and acknowledge the author(s), as in the example notice below.
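For example, a GPL-licensed Python source file typically opens with a short notice along the following lines (paraphrasing the per-file notice the FSF suggests for GPLv3; the description, year, and author are placeholders):

# <one line describing what the program does>
# Copyright (C) <year>  <name of author>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <https://www.gnu.org/licenses/>.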

Copyleft-type licenses are a novel use of existing copyright law to ensure that a work remains freely available. The GNU General Public License, originally written by Richard M. Stallman, was the first copyleft license to see extensive use, and it continues to dominate the licensing of copylefted software. Creative Commons, a nonprofit organization founded by Lawrence Lessig, provides a similar license condition called ShareAlike.

DRM

Digital rights management (DRM) is a class of controversial access control technologies used by hardware manufacturers, publishers, copyright holders, and individuals with the intent of limiting the use of digital content and devices after sale. DRM is any technology that inhibits uses of digital content that are not desired or intended by the content provider. DRM also includes specific instances of digital works or devices. The Digital Millennium Copyright Act (DMCA) was passed in the United States to impose criminal penalties on those who make available technologies whose primary purpose and function are to circumvent content protection technologies.

The use of digital rights management is not universally accepted. Some content providers claim that DRM is necessary to fight copyright infringement online and that it can help the copyright holder maintain artistic control or ensure continued revenue streams. Those opposed to DRM contend there is no evidence that DRM helps prevent copyright infringement, arguing instead that it serves only to inconvenience legitimate customers, and that DRM helps big business stifle innovation and competition. Further, works can become permanently inaccessible if the DRM scheme changes or if the service is discontinued. Proponents argue that digital locks should be considered necessary to prevent “intellectual property” from being copied freely, just as physical locks are needed to prevent personal property from being stolen.

Digital locks placed in accordance with DRM policies can also restrict users from doing something perfectly legal, such as making backup copies of CDs or DVDs, lending materials out through a library, accessing works in the public domain, or using copyrighted materials for research and education under fair use laws. Some opponents, such as the Free Software Foundation (FSF) through its Defective by Design campaign, maintain that the use of the word “rights” is misleading and suggest that people instead use the term “digital restrictions management”. Their position is that copyright holders are restricting the use of material in ways that go beyond the scope of existing copyright laws and that should not be covered by future laws. The Electronic Frontier Foundation (EFF) and the FSF consider the use of DRM systems to be an anti-competitive practice.