Author Archives: kishore

Unit 2: Need for institutions to standardize design process – kishore k

Standardisation refers to the process of developing and implementing technical standards.

A standardized design process depends on the collection and dissemination of standards, and it drastically reduces the learning curve for new users: less training burden on current staff, less time spent looking for answers, fewer out-of-standard errors, and far less time spent developing new products, leaving more time to spend on them.

Some of the Standardized Design Process solutions are:

Rapid creation and deployment of new standards
Easy access to standards
Assured compliance with regulatory, company, industry, and other standards.
Standards are never outdated
Homogenization of “tribal” knowledge
Higher stakeholder buy-in
Institutions seek to standardize processes for several important reasons. Within an institution, standardization can facilitate communications about how the business operates, enable smooth handoffs across process boundaries, and make possible comparative measures of performance. Across companies, standard processes can make commerce easier for the same reasons—better communications, more efficient handoffs, and performance benchmarking. Since information systems support processes, standardization allows uniform information systems within institutions as well as standard systems interfaces among different firms.

Standard processes also allow easier outsourcing of process capabilities. In order to outsource processes effectively, organizations need a means of evaluating three things in addition to cost. First is the external provider’s set of activities and how they flow. Institutions need a set of standards for process activities so that they can communicate easily and efficiently when discussing outsourced processes.

Second, institutions need a set of process management standards that indicate how well their processes are managed and measured and whether they are on course for continuous improvement. Process management standards are based on the assumption that good process management will eventually result in good process flows and performance. In some domains such as information technology and manufacturing, these standards are already in wide use (via the Software Engineering Institute’s Capability Maturity Model and the ISO 9000 series, respectively).

Example: When the Capability Maturity Model is applied to an existing organization’s software-development processes, it allows an effective approach toward improving them.

The ISO 9000 quality management approach and ISO 9001 registration assist organizations and assure customers that the company has a good Quality Management System (QMS) in place. Executing an ISO 9000 quality process gives a company access to a wider market for its products, particularly in the international arena, but also domestically.

ISO 9001:2008 contains the requirements an organization must comply with to become ISO 9001 registered, and ISO 9004:2009 covers managing for the sustained success of an organization. Other ISO quality standards were created to support the ISO 9000 family, including:

ISO 10001 (2007) Quality management – Customer satisfaction: guidelines for codes of conduct for organizations
ISO 10004 (2010) Processes to monitor and measure customer satisfaction
ISO 10005 (2005) Guidelines for the development, review, acceptance, application and revision of quality plans
ISO 10006 (2003) Guidance on the application of quality management in projects

For health care information infrastructures, the following standards were defined:

Structure for Nomenclature, Classification and Coding of Properties in Clinical Laboratory Sciences
Structure for Classification and Coding of Surgical Procedures
Medicinal Product Identification Standard
Medical Imaging and Related Data Interchange Format Standards
Medical Image Management Standard

 

Unit 5: copyleft, gnu-linux, creative commons, GPL, DRM – kishore k

Copyleft is a general method for making a program free: a type of license that attempts to ensure that the public retains the freedom to use, modify, extend and redistribute a creative work.

Copyleft also provides an incentive for other programmers to add to free software. Important free programs such as the GNU C++ compiler exist only because of this.

The term copyleft is a play on the word copyright. Copyright does not protect facts, discoveries, ideas, systems or methods of operation, although it can protect the way they are expressed. Copyleft, on the other hand, helps programmers who want to contribute improvements to free software get permission to do so. Copyleft licenses require that the information necessary for reproducing and modifying the work be made available to recipients of the executable.

In the case of computer software, copyleft licenses require that the source code, the form that facilitates further modification, be made freely available to anyone. To copyleft a program, it is first stated that the program is copyrighted; distribution terms are then added, which are a legal instrument giving everyone the right to use, modify, and redistribute the program’s code, or any program derived from it, but only if the distribution terms are unchanged.

GNU/Linux, or simply Linux, is an alternative to Microsoft Windows. It is easy to use and gives more freedom to users. Anyone can install it: Linux is free as in freedom, and often available free of charge.The Free Software Foundation views Linux distributions that use GNU software as GNU variants and they ask that such operating systems be referred to as GNU/Linux or a Linux-based GNU system.

It is possible to write good free software without thinking of GNU; much good work has been done in the name of Linux as well. But ever since the term “Linux” was first coined, it has been associated with a philosophy that does not make a commitment to the freedom to cooperate.

A great challenge to the future of free software comes from the tendency of the “Linux” distribution companies to add nonfree software to GNU/Linux in the name of convenience and power. All the major commercial distribution developers do this; none limits itself to free software. Most of them do not clearly identify the nonfree packages in their distributions. Many even develop nonfree software and add it to the system. Some outrageously advertise “Linux” systems that are “licensed per seat”, which give the user as much freedom as Microsoft Windows.

Creative Commons helps us to share our knowledge and creativity with the world.

Creative Commons’ easy-to-use copyright licenses provide a simple, standardized way to give the public permission to share and use your creative work, on conditions of your choice.

Creative Commons (CC) is a non-profit organization devoted to expanding the range of creative works available for others to legally build upon and share. The organization has released several copyright licenses, known as Creative Commons licenses, free of charge to the public. These licenses allow creators to communicate which rights they reserve, and which rights they waive for the benefit of recipients or other creators.

For Example : If we want to give people the right to share, use, and even build upon a work we’ve created, we should consider publishing it under a Creative Commons license. CC gives us flexibility and protects the people who use our work, so people don’t have to worry about copyright infringement, as long as they abide by the conditions we have specified.

The GNU GPL, or GNU General Public License, is a free, copyleft license for software.

The licenses for most software and other practical works are designed to take away our freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee our freedom to share and change all versions of a program, to make sure it remains free software for all its users.

The GPL grants the recipients of a computer program the rights of the Free Software Definition and uses copyleft to ensure the freedoms are preserved whenever the work is distributed, even when the work is changed or added to. The GPL is a copyleft license, which means that derived works can only be distributed under the same license terms. It was the first copyleft license for general use. The GNU GPL protects our rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.

DRM, or Digital Restrictions Management, is technology that controls what you can do with the digital media and devices you own. When a program doesn’t let you share a song, read an ebook on another device, or play a single-player game without an internet connection, you are being restricted by DRM. In other words, DRM creates a damaged good. It prevents you from doing what would normally be possible if it weren’t there, creating a dangerous situation for freedom and privacy and opening the door to censorship.

DRM is designed to take all of the incredible possibilities enabled by digital technologies and place them under the control of a few, who can then micromanage and track everything we do with our media. This creates the potential for massive digital book burnings and large scale surveillance over people’s media viewing habits.

DRM gives media and technology companies the ultimate control over every aspect of what people can do with their media: where they can use it, on what devices, using what apps, for how long, and any other conditions the retailer wants to set. Digital media has many advantages over traditional analog media, but DRM attempts to make every possible use of digital goods something that must be granted permission for. This concentrates all power over the distribution of media into the hands of a few companies. For example, DRM gives ebook sellers the power to remotely delete all copies of a book, to keep track of what books readers are interested in and, with some software, even what notes they take in their books.

Unit 4: Test Driven Development vs. software testing – kishore k

Test Driven Development uses the “test first” approach, in which test cases and unit tests are written before any application code. The unit tests are written by the developers, who then write just enough code to make the failing tests pass. Software development becomes a series of very short iterations in which test cases drive the creation of software and ultimately the design of the program.

The unit tests form the pillar for the development of the application in a TDD environment. Kent Beck, who is credited with having developed the technique, stated that TDD encourages simple designs and inspires confidence.

Test-driven development cycle


What makes TDD unique is its focus on design specification rather than validation. The TDD approach leads to higher quality software architecture by improving quality, simplicity, and testability while increasing code confidence and team productivity.

The basic steps of TDD are :

Step #1- Create/add a new test.

To write a test, the developer must clearly understand the feature’s specification and requirements. The developer can accomplish this through use cases that cover the requirements and exception conditions.

Step #2- Run the test to ensure that it fails.

This step rules out the possibility that the new test will always pass, and therefore be worthless. The new test should also fail for the expected reason. This increases confidence  that it is testing the right thing, and will pass only in intended cases.

Step #3- Write just enough code so that the test will pass.

The next step is to write some code that will cause the test to pass. The new code written at this stage will not be perfect and may, for example, pass the test in an inelegant way.

Step #4- Refactor the new code to remove duplication and unnecessary complication.

It is important that the code written is designed only to pass the test; no further, and therefore untested, functionality should be predicted and ‘allowed for’ at any stage. If all test cases now pass, the programmer can be confident that the code meets all the tested requirements.

Step #5- Ensure that refactoring did not break the code under test.

The concept of removing duplication is an important aspect of any software design. By re-running the test cases, the developer can be confident that refactoring has not damaged any existing functionality.

Step #6- Repeat this process until all tests have been written and passed.

Starting with another new test, the cycle is then repeated to push the functionality forward.
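A minimal Python sketch of one pass through this cycle, using the standard unittest module (the add function and the test names are illustrative assumptions, not from the original text):

```python
import unittest

# Step 1: the test is written first, before any production code exists.
class TestAdder(unittest.TestCase):
    def test_add_two_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative(self):
        self.assertEqual(add(-1, 1), 0)

# Step 2: running the suite at this point fails with a NameError,
# confirming the test can fail for the expected reason.

# Step 3: write just enough code to make the tests pass.
def add(a, b):
    return a + b

# Steps 4-5: refactor (nothing to simplify in this tiny case) and
# re-run the tests to confirm behaviour is unchanged.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdder)
result = unittest.TextTestRunner().run(suite)
```

Step 6 would then repeat the cycle with the next test, for example one covering floating-point inputs.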

 

Software testing is a process used to identify the correctness, completeness, and quality of developed computer software. It includes a set of activities conducted with the intent of finding errors in software so that they can be corrected before the product is released to end users.

In simple words, software testing is an activity to check whether the actual results match the expected results and to ensure that the software system is defect free.

Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include the process of executing a program or application with the intent of finding software errors or other defects.
Software testing can verify that the software:
• meets the requirements that guided its design and development,
• works as expected,
• can be implemented with the same characteristics,
• and satisfies the needs of stakeholders.
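As a minimal sketch of the expected-vs-actual check described above (the discount function and its requirement are hypothetical, used only for illustration):

```python
def discount(price, percent):
    """Apply a percentage discount to a price."""
    return price - price * percent / 100

# Expected result defined by the requirement: 10% off 200 should be 180.
expected = 180.0
actual = discount(200, 10)

# The test passes only when the actual result matches the expected one.
assert actual == expected, f"expected {expected}, got {actual}"
```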

Conclusion :

With the software testing process:

• We cannot test a program completely
• We can only test against system requirements, and may not detect errors in the requirements themselves
• Incomplete or ambiguous requirements may lead to inadequate or incorrect testing
• Exhaustive (total) testing is impossible in practice
• Time and budget constraints normally require very careful planning of the testing effort, a compromise between thoroughness and budget
• Test results are used to make business decisions about release dates
• Even if we do find the last bug, we will never know it
• We will run out of time before we run out of test cases
• We cannot test every path, every valid input, or every invalid input

On the other hand, with the TDD approach we can achieve:

Improved Quality
The TDD approach forces each component to steadily build capability at a consistently higher level of quality by testing first, developing in small increments, and requiring tests to pass. Once individual component development switches to integration, this high level of component quality leads to rapid integration and more effective testing.

Promotes Simplicity
During the implementation of a feature, the test-driven developer writes the simplest code to make the tests pass. TDD helps developers avoid over-engineering solutions due to its short development micro-cycle. It also helps break the problem into modular units and produce a solution that can be tested as well as implemented.

Cuts Time-to-Market
The TDD cycle is shorter than the traditional development cycle. By focusing on building only what is needed to get the tests to pass and using immediate feedback to reduce regression errors, the development cycle can quickly be reduced to its essential core.

Enhances Flexibility

TDD boosts a software development organization’s ability to rapidly respond to changing requirements or unanticipated product updates by facilitating shorter development and integration cycles and supporting the rapid refinement of new requirements. A solid TDD culture with a rich test base is the best preparation to rapidly seize opportunities and beat competitors to market.

 

Unit 3: programming paradigm – kishore k

A programming paradigm is a fundamental style of computer programming. There are four main paradigms: object-oriented, imperative, functional and logic programming.

The paradigms’ foundations are distinct models of computation: the Turing machine for object-oriented and imperative programming, the lambda calculus for functional programming, and first-order logic for logic programming.

Object-oriented programming (OOP) is a programming paradigm that represents concepts as “objects” that have data fields and associated procedures known as methods. Objects, which are instances of classes, are used to interact with one another to design applications and computer programs.

» Imperative : Machine-model based
» Functional : Equations; Expression Evaluation
» Logical : First-order Logic Deduction
» Object-Oriented : Programming with Data Types

Imperative paradigms
• It is based on commands that update variables in storage. The Latin word imperare means “to command”.
• The language provides statements, such as assignment statements, which explicitly change the state of the memory of the computer.
• This model closely matches the actual execution of the computer and usually has high execution efficiency.
• Many people also find the imperative paradigm to be a more natural way of expressing themselves.
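A short hypothetical Python fragment in the imperative style, where assignment statements explicitly update the state of a variable in memory:

```python
# Imperative style: compute a sum by repeatedly updating mutable state.
numbers = [1, 2, 3, 4, 5]
total = 0               # a storage cell that commands will update
for n in numbers:
    total = total + n   # assignment explicitly changes memory state
print(total)            # 15
```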

Functional programming paradigms
• In this paradigm we express computations as the evaluation of mathematical functions.
• Functional programming paradigms treat values as single entities. Unlike variables, values are never modified; instead, values are transformed into new values.
• Computations in functional languages are performed largely by applying functions to values, e.g., (+ 4 5).
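The same computation can be sketched functionally in Python (an illustration in the spirit of the Lisp-style (+ 4 5) above): values are immutable, and new values are produced by applying functions rather than by mutating state.

```python
from functools import reduce
from operator import add

# Functional style: no mutable state; the result is the value of an expression.
numbers = (1, 2, 3, 4, 5)          # an immutable tuple of values
total = reduce(add, numbers)       # apply + across the values, like (+ 1 2 3 4 5)
doubled = tuple(map(lambda n: n * 2, numbers))  # transform into new values

print(total)    # 15
print(doubled)  # (2, 4, 6, 8, 10)
```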

Logic programming paradigms
• In this paradigm we express computation exclusively in terms of mathematical logic.
• While the functional paradigm emphasizes the idea of a mathematical function, the logic paradigm focuses on predicate logic, in which the basic concept is a relation.
• Logic languages are useful for expressing problems where it is not obvious what the functions should be.
• For example consider the uncle relationship: a given person can have many uncles, and another person can be uncle to many nieces and nephews.
• Let us consider now how we can define the brother relation in terms of the simpler relations and properties father, mother, and male. Using the Prolog logic language one can say:
brother(X,Y) :-   /* X is the brother of Y            */
                  /* if there are two people F and M for which */
    father(F,X),  /* F is the father of X             */
    father(F,Y),  /* and F is the father of Y         */
    mother(M,X),  /* and M is the mother of X         */
    mother(M,Y),  /* and M is the mother of Y         */
    male(X).      /* and X is male                    */

The Object-Oriented Paradigm
• The OO programming paradigm is not just a few new features added to a programming language; it is a new way of thinking about the process of decomposing problems and developing programming solutions.
• Alan Kay characterized the fundamentals of OOP as follows:
• Everything is modeled as an object.
• Computation is performed by message passing: objects communicate with one another via messages.
• Every object is an instance of a class, where a class represents a grouping of similar objects.
• Inheritance defines the relationships between classes.
• The object-oriented paradigm focuses on the objects that a program is representing, and on allowing them to exhibit “behavior”.
• Unlike the imperative paradigm, where data are passive and procedures are active, in the OO paradigm data is combined with procedures to give objects, which are thereby rendered active.
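A minimal Python sketch of these ideas (the Shape, Circle, and Square classes are illustrative assumptions): objects combine data fields with methods, computation happens through message passing, and inheritance relates classes.

```python
import math

class Shape:
    """A class groups similar objects and defines the messages they answer."""
    def area(self):
        raise NotImplementedError

    def describe(self):
        # Message passing: self.area() dispatches to the subclass's method.
        return f"{type(self).__name__} with area {self.area():.2f}"

class Circle(Shape):              # inheritance: a Circle is-a Shape
    def __init__(self, radius):
        self.radius = radius      # data field combined with behaviour

    def area(self):
        return math.pi * self.radius ** 2

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

# Objects are instances of classes; they exhibit behaviour via messages.
shapes = [Circle(1.0), Square(2.0)]
for s in shapes:
    print(s.describe())
```

Note how the same describe message produces different behaviour depending on the receiving object's class, without the caller knowing which class that is.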

A multi-paradigm programming language is a programming language that supports more than one programming paradigm. As Leda designer Timothy Budd puts it: “The idea of a multiparadigm language is to provide a framework in which programmers can work in a variety of styles, freely intermixing constructs from different paradigms.” The design goal of such languages is to allow programmers to use the best tool for a job, admitting that no one paradigm solves all problems in the easiest or most efficient way.

One example is C#, which includes the imperative and object-oriented paradigms as well as some support for functional programming through type inference, anonymous functions and Language Integrated Query. Others are F# and Scala, which provide similar functionality to C# but also include full support for functional programming. Perhaps the most extreme example is Oz, which has subsets that are logic, functional, object-oriented and dataflow concurrent languages, among other paradigms.

Unit 1: A Spiral Model of Software Development and Enhancement – kishore k

In the software development process there was a need to maintain and follow a standard process to develop software, so people started to develop process models. Winston W. Royce came up with the waterfall model in 1970.

The waterfall model served its purpose in the beginning, but its strictly linear design process failed to keep serving that purpose. In 1986 Barry Boehm came up with A Spiral Model of Software Development and Enhancement.

The spiral model is an iterative model of the software development process which adds the advantage of continuous refinement across phases such as requirements, analysis, design and implementation.

The spiral model establishes the transition criteria for progressing from one stage to the next, with refinement on each iteration.

The radial dimension in Boehm’s figure represents the cumulative cost incurred in accomplishing the steps to date; the angular dimension represents the progress made in completing each cycle of the spiral.


Each cycle of the spiral begins with identifying the objectives of the portion of the product being elaborated, the alternative means of implementing that portion, and the constraints imposed on the application of the alternatives. Frequently this process identifies areas of uncertainty that are significant sources of project risk; once the risks are evaluated, the next step is determined by the relative remaining risks of the product development.

The risk-resolution activities in phase 1 of the spiral model include surveys and analyses, including structured interviews of software developers and managers. The plan for the next phase involves partitioning into separate activities to address management improvements, facilities development, and development of the increments of a software development environment.

The key characteristic of a Spiral model is risk management at regular stages in the development cycle.

The Spiral is visualized as a process passing through some number of iterations, with the four quadrant diagram representative of the following activities:

  1. Formulate plans: identify software objectives, select the alternatives for implementing the program, and clarify the project’s development constraints
  2. Risk analysis: an analytical assessment of selected programs, to consider how to identify and eliminate risk
  3. Implementation of the project: the implementation of software development and verification

The risk-driven spiral model, by emphasizing the conditions of options and constraints, supports software reuse and helps integrate software quality as a specific goal of product development. However, the spiral model has some restrictive conditions, as follows:

  1. The spiral model emphasizes risk analysis, and thus requires customers to accept this analysis and act on it. This requires both trust in the developer as well as the willingness to spend more to fix the issues, which is the reason why this model is often used for large-scale internal software development.
  2. If the implementation of risk analysis will greatly affect the profits of the project, the spiral model should not be used.
  3. Software developers have to actively look for possible risks, and analyze it accurately for the spiral model to work.

The first stage is to formulate a plan to achieve the objectives within these constraints, and then strive to find and remove all potential risks through careful analysis and, if necessary, by constructing a prototype. If some risks cannot be ruled out, the customer has to decide whether to terminate the project or to ignore the risks and continue anyway. Finally, the results are evaluated and the design of the next phase begins.

The spiral model has helped software engineers get their hands in and start working on a project earlier, and it is better able to cope with the changes that software development entails. With the spiral model, estimates become more realistic as work progresses, features are added in phases, and products implemented with it show increased productivity to a large extent.