Bharat Banate's Work Profile

Wednesday, October 31, 2007

IT News: Democratising IT

A New Model For PC Penetration
INDIA has emerged as a global leader in the advance of information technology. Yet the country faces a fundamental challenge: building on its successes by enabling greater access to technology for its people, which will drive expanded economic growth and opportunity. Less than 3% of Indians own a personal computer, compared to nearly 8% of Chinese, almost 14% of Brazilians and more than 15% of Russians.

Despite the very low penetration of computers in India, the impact has been profound. India is home to three of the world's 10 biggest IT firms (Tata, Infosys and Wipro) and already generates nearly $40 billion in revenues from its IT software and services sector; Nasscom forecasts this figure to grow by nearly 27% next year. It must be recognised that the benefits of broader IT use and deeper Internet access are substantial, and will be a catalyst for, not a result of, economic growth and modernisation. India is already benefiting from e-governance initiatives that deliver real-time tallying of results in the world's largest elections, and from technology-driven distance learning that brings the world's educational resources to students regardless of location or economic background.

But cost has been a major roadblock to broader technology adoption in India. Reducing taxes and tariffs is essential to facilitating broader access to technology and driving growth in the technology sectors. Hardware accounts for 43% of Chinese exports versus only 2.3% of India's, so India is clearly missing out on a big opportunity. If it doesn't act soon, investments will flow instead to China and to emerging countries such as Vietnam.
Consider also that, in India, a typical desktop computer costs 44% of the average Indian's annual wage. Brazil's experience in supporting technology adoption is particularly instructive. Since Brazil reduced taxes on computer purchases two years ago, its PC market has tripled and more than two million families have bought their first PC, making Brazil the world's fourth-largest PC market. Even more important was the multiplier effect this had on the economy: thousands of IT industry jobs were created, and government revenue from the IT sector increased by 50%.

But cost isn't the only barrier. IT complexity also threatens access to technology while increasing its cost and environmental impact. We are all members of what we at Dell call the ReGeneration, a new global movement concerned with the regeneration not just of our businesses but also of our planet. Environmental protection efforts are improving, as reflected in the Nobel Prize jointly awarded to former US vice-president Al Gore and the Intergovernmental Panel on Climate Change headed by Rajendra Pachauri, and technology is an important part of these efforts. The future will bring even more benefits.
By 2020, microprocessors will run one thousand times as many computations per second as they do today. That will mean enormous gains in productivity and efficiency, giving people unimaginable power to access, organise and transform information. Indian citizens will benefit more fully from this progress as government and industry leaders strengthen their cooperation, creating the conditions in which IT can flourish and reach all people, businesses and institutions across the country.

India plays a pivotal role in global IT. Technology users in the western world benefit every day from the work of bright, talented Indian employees and their constant innovation. But beyond serving as the world's software writer or back office, India is harnessing the productivity, efficiency and innovation benefits of IT as a foundation for global economic competitiveness. I see industry working with great commitment alongside India's government to build on this progress and to help further democratise access to technology, so that more Indian citizens enjoy even more of technology's benefits with an ever-decreasing impact on our environment. That is our shared responsibility. By harnessing these forces, the democratisation and simplification of technology, we can make a positive impact not just on our economies but also on our planet.

(Michael Dell)

Sunday, October 28, 2007

Software Testing: Introduction

Introduction
Software testing is the process of executing a program or system with the intent of finding errors [Myers79]. It can also be described as any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results [Hetzel88]. Software is not unlike other physical processes where inputs are received and outputs are produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software can fail in many bizarre ways. Detecting all of the different failure modes of software is generally infeasible. [Rstcorp]

Unlike most physical systems, most of the defects in software are design errors, not manufacturing defects. Software does not suffer from corrosion or wear and tear; generally it will not change until it is upgraded or becomes obsolete. So once the software is shipped, the design defects -- or bugs -- will remain buried and latent until they are activated.

Software bugs will almost always exist in any software module of moderate size: not because programmers are careless or irresponsible, but because the complexity of software is generally intractable -- and humans have only limited ability to manage complexity. It is also true that for any complex system, design defects can never be completely ruled out.

Discovering the design defects in software is equally difficult, for the same reason of complexity. Because software and other digital systems are not continuous, testing boundary values is not sufficient to guarantee correctness. All of the possible values would need to be tested and verified, but complete testing is infeasible. Exhaustively testing a simple program that adds just two 32-bit integer inputs (yielding 2^64 distinct test cases) would take millions of years, even if tests were performed at a rate of thousands per second. Obviously, for a realistic software module, the complexity can be far beyond this example. If inputs from the real world are involved, the problem gets even worse, because timing, unpredictable environmental effects, and human interactions all become possible input parameters.
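
To put the scale in perspective, here is a back-of-the-envelope calculation, written as a rough Python sketch; the testing rate below is an assumed, optimistic figure used only for illustration:

    # Rough estimate of exhaustively testing a program that adds two 32-bit integers.
    total_cases = 2 ** 64                      # every combination of the two inputs
    tests_per_second = 10_000                  # assumed testing rate, for illustration
    seconds_per_year = 60 * 60 * 24 * 365

    years = total_cases / (tests_per_second * seconds_per_year)
    print(f"about {years:,.0f} years")         # roughly 58 million years

Even at ten thousand tests per second, the exhaustive run would outlast any project by tens of millions of years.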

A further complication has to do with the dynamic nature of programs. If a failure occurs during preliminary testing and the code is changed, the software may now work for a test case that it didn't work for previously. But its behavior on pre-error test cases that it passed before can no longer be guaranteed. To account for this possibility, testing should be restarted. The expense of doing this is often prohibitive. [Rstcorp]

An interesting analogy parallels the difficulty in software testing with that of pesticides, known as the Pesticide Paradox [Beizer90]: every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual. But this alone does not guarantee better software, because the Complexity Barrier [Beizer90] principle states: software complexity (and therefore that of bugs) grows to the limits of our ability to manage that complexity. By eliminating the (previous) easy bugs you allow another escalation of features and complexity, but this time you have subtler bugs to face, just to retain the reliability you had before. Society seems unwilling to limit complexity because we all want that extra bell, whistle, and feature interaction. Thus, our users always push us to the complexity barrier, and how close we can approach that barrier is largely determined by the strength of the techniques we can wield against ever more complex and subtle bugs. [Beizer90]

Regardless of these limitations, testing is an integral part of software development. It is broadly deployed in every phase of the software development cycle. Typically, more than 50% of the development time is spent on testing. Testing is usually performed for the following purposes:
To improve quality.
As computers and software are used in critical applications, the outcome of a bug can be severe. Bugs can cause huge losses. Bugs in critical systems have caused airplane crashes, allowed space shuttle missions to go awry, halted trading on the stock market, and worse. Bugs can kill. Bugs can cause disasters. The so-called year 2000 (Y2K) bug has given birth to a cottage industry of consultants and programming tools dedicated to making sure the modern world doesn't come to a screeching halt on the first day of the next century. [Bugs] In a computerized embedded world, the quality and reliability of software is a matter of life and death.

Quality means conformance to the specified design requirements. Being correct, the minimum requirement of quality, means performing as required under the specified circumstances. Debugging, a narrow view of software testing, is performed heavily by programmers to find design defects. The imperfection of human nature makes it almost impossible to get a moderately complex program correct the first time. Finding the problems and getting them fixed [Kaner93] is the purpose of debugging in the programming phase.

For Verification & Validation (V&V)
As the name Verification and Validation indicates, another important purpose of testing is verification and validation (V&V). Testing can serve as a metric and is heavily used as a tool in the V&V process. Testers can make claims based on interpretations of the testing results: either the product works under certain situations, or it does not. We can also compare the quality of different products built to the same specification, based on results from the same tests.

We cannot test quality directly, but we can test related factors to make quality visible. Quality has three sets of factors -- functionality, engineering, and adaptability. These three sets of factors can be thought of as dimensions in the software quality space. Each dimension may be broken down into its component factors and considerations at successively lower levels of detail; frequently cited quality considerations include correctness, reliability, usability, efficiency, testability, flexibility, and maintainability.
Good testing provides measures for all relevant factors. The importance of any particular factor varies from application to application. Any system where human lives are at stake must place extreme emphasis on reliability and integrity. In the typical business system usability and maintainability are the key factors, while for a one-time scientific program neither may be significant. Our testing, to be fully effective, must be geared to measuring each relevant factor and thus forcing quality to become tangible and visible. [Hetzel88]

Tests with the purpose of validating that the product works are named clean tests, or positive tests. The drawback is that such tests can only validate that the software works for the specified test cases; a finite number of tests cannot validate that the software works for all situations. By contrast, only one failed test is sufficient to show that the software does not work. Dirty tests, or negative tests, refer to tests aimed at breaking the software, or showing that it does not work. A piece of software must have sufficient exception handling capabilities to survive a significant level of dirty tests.
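
The distinction is easy to see in an automated unit test suite. Here is a minimal sketch using Python's unittest module, with a hypothetical divide function standing in for the code under test:

    import unittest

    def divide(a, b):
        # Hypothetical code under test: rejects division by zero.
        if b == 0:
            raise ValueError("division by zero")
        return a / b

    class DivideTests(unittest.TestCase):
        def test_clean_valid_inputs(self):
            # Clean (positive) test: confirm the software works for a specified case.
            self.assertEqual(divide(10, 2), 5)

        def test_dirty_zero_divisor(self):
            # Dirty (negative) test: try to break the software and check it fails gracefully.
            with self.assertRaises(ValueError):
                divide(10, 0)

    if __name__ == "__main__":
        unittest.main()

The clean test can only vouch for the one input it exercises; the dirty test probes the exception handling that lets the software survive hostile input.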

A testable design is a design that can be easily validated, falsified and maintained. Because testing is a rigorous effort and requires significant time and cost, design for testability is also an important design rule for software development.

For reliability estimation
Software reliability is closely related to many aspects of software, including its structure and the amount of testing it has been subjected to. Based on an operational profile (an estimate of the relative frequency of use of various inputs to the program [Lyu95]), testing can serve as a statistical sampling method to gather failure data for reliability estimation.
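
As an illustration of that sampling idea, here is a small sketch; the program, its input classes, and the profile weights are all hypothetical:

    import random

    # Hypothetical operational profile: relative frequency of each input class [Lyu95].
    operational_profile = {"query": 0.70, "update": 0.25, "report": 0.05}

    def run_one_test(input_class):
        # Stand-in for running the program on a random input of this class and
        # checking the output; returns True when the run fails. Here we simulate
        # a program that occasionally fails on "report" inputs.
        return input_class == "report" and random.random() < 0.02

    classes = list(operational_profile)
    weights = list(operational_profile.values())

    trials = 100_000
    failures = sum(run_one_test(random.choices(classes, weights)[0])
                   for _ in range(trials))

    # Estimated probability of failure per run under this usage profile.
    print(f"estimated failure rate: {failures / trials:.4%}")

Because the test inputs are drawn with the same frequencies real users would generate, the observed failure rate estimates the reliability users will actually experience.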

Software testing is not mature. It still remains an art, because we still cannot make it a science. We are still using the same testing techniques invented 20-30 years ago, some of which are crafted methods or heuristics rather than good engineering methods. Software testing can be costly, but not testing software is even more expensive, especially in places where human lives are at stake. Solving the software-testing problem is no easier than solving the Turing halting problem. We can never be sure that a piece of software is correct. We can never be sure that the specifications are correct. No verification system can verify every correct program. We can never be certain that a verification system is correct either.

Saturday, October 20, 2007

IT News: Cognizant pips Infy to acquire marketRx for $135 m

COGNIZANT Technology Solutions, an IT and BPO services company, said it will pay $135 million in cash to acquire New Jersey-based marketRx, a provider of analytics and related software services to life sciences companies. It’s the biggest acquisition made by Cognizant so far, and among the biggest in the high-end business process outsourcing space recently.
In a deal structured on Thursday night, Cognizant is believed to have pipped IT services giant Infosys Technologies to the acquisition. Earlier, ET had reported that a host of suitors, including biggies Infosys and Wipro, had evinced interest in acquiring marketRx, one of the largest independent offshore KPO businesses. Two investment banks, Mumbai-based Avendus and US-based William Blair, advised marketRx in sealing the deal.
This is the second high-profile deal in the KPO space, after WNS snapped up Bangalore-headquartered Marketics for $65 million earlier this year. Sources said that while Infosys’ valuation of marketRx was higher, Cognizant offered an upfront cash payout. It is believed that Infy’s offer had a substantial earn-out component, which essentially means a combination of upfront money with the remainder based on future performance.
The Nasdaq-listed Cognizant, which has predominant operations in India, said the acquisition would help strengthen its analytics unit and offer more services to the life sciences industry. The deal is expected to close in the fourth quarter of 2007 and will be funded from Cognizant's cash reserves. marketRx, with per-employee revenue of about $100,000, is projected to report revenues of over $40 million in 2007. It has 430 people, with 260 in Gurgaon, 160 in the US (four locations), and 10 in London.
Cognizant is expected to cross 55,000 people by the end of this year, with revenue guidance of $2.11 billion for 2007. Its manpower addition this year is expected to be over 16,000, 40 times the strength of marketRx. Cognizant president R Chandrasekaran said: “It is a ‘tuck-under’ acquisition that is consistent with our acquisition strategy of selectively acquiring businesses that complement or enhance our business model and value to our customers.”
Cognizant president and CEO Francisco D’Souza said: “This acquisition expands our capabilities in the analytics segment and broadens our service offerings for the life sciences industry while providing strong synergies with our existing business intelligence/data warehousing and CRM (customer relationship management) services.”
marketRx has a proven global delivery model for analytics, deep domain knowledge, and a proprietary analytics software platform, he said. “We expect to leverage these assets to establish a pre-eminent position in the fast-growing analytics market, both in life sciences and in other industries,” Mr D’Souza said in a statement.
marketRx president & CEO Jaswinder (Jassi) Chadha said: “The combination of our market leading position in the life sciences analytics segment and Cognizant’s strengths as a top global services player will allow us to expand our relationships with our life sciences clients by providing them with a broader range of outsourced services, and conversely enables us to extend our capabilities to other vertical markets.”

Friday, October 19, 2007

Mainframe: Mainframe Server Software Architectures


Purpose and Origin
Since 1994 mainframes have been combined with distributed architectures to provide massive storage and to improve system security, flexibility, scalability, and reusability in the client/server design. In a mainframe server software architecture, mainframes are integrated as servers and data warehouses in a client/server environment. Additionally, mainframes still excel at simple transaction-oriented data processing to automate repetitive business tasks such as accounts receivable, accounts payable, general ledger, credit account management, and payroll. Siwolp and Edelstein provide details on mainframe server software architectures [Siwolp 95, Edelstein 94].

Technical Detail
While client/server systems are suited for rapid application deployment and distributed processing, mainframes are efficient at online transaction processing, mass storage, centralized software distribution, and data warehousing [Data 96]. Data warehousing is information (usually in summary form) extracted from an operational database by data mining (drilling down into the information through a series of related queries). The purpose of data warehousing and data mining is to provide executive decision makers with data analysis information (such as trends and correlated results) to make and improve business decisions.
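
To make the drill-down idea concrete, here is a small sketch; the table, the data, and the queries are hypothetical illustrations, not taken from any particular warehouse product:

    import sqlite3

    # Hypothetical extract from an operational database.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
    con.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
        ("East", "Widget", 120.0), ("East", "Gadget", 80.0),
        ("West", "Widget", 200.0), ("West", "Gadget", 40.0),
    ])

    # First query: summary form, the kind of data held in a warehouse.
    for region, total in con.execute(
            "SELECT region, SUM(amount) FROM sales GROUP BY region"):
        print(region, total)

    # Drill-down query: break the interesting region out by product.
    for product, total in con.execute(
            "SELECT product, SUM(amount) FROM sales "
            "WHERE region = 'West' GROUP BY product"):
        print(product, total)

Each query answers a question raised by the previous one, which is the "series of related queries" the definition above describes.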
(Figure: a mainframe in a three tier client/server architecture.)

The combination of mainframe horsepower as a server in a client/server distributed architecture results in a very effective and efficient system. Mainframe vendors are now providing standard communications and programming interfaces that make it easy to integrate mainframes as servers in a client/server architecture. Using mainframes as servers in a client/server distributed architecture provides a more modular system design, and provides the benefits of the client/server technology.

Using mainframes as servers in a client/server architecture also enables the distribution of workload between major data centers and provides disaster protection and recovery by backing up large volumes of data at disparate locations. The current model favors "thin" clients (containing primarily user interface services) with very powerful servers that do most of the extensive application and data processing, as in a two tier architecture. In a three tier client/server architecture, process management (business rule execution) could be off-loaded to another server.
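
As a purely schematic sketch of that split, written in Python with illustrative names and data that stand in for the real tiers:

    # Data tier: stands in for the mainframe data server / warehouse.
    ACCOUNTS = {"C-1001": 2500.0, "C-1002": 125.0}

    def data_tier_get_balance(customer_id):
        return ACCOUNTS[customer_id]

    # Middle tier: process management, where the business rules execute.
    def app_tier_can_ship_order(customer_id, order_total):
        balance = data_tier_get_balance(customer_id)
        return balance >= order_total      # business rule lives here, not on the client

    # Thin client: user interface only, delegates all processing to the servers.
    def client_submit_order(customer_id, order_total):
        ok = app_tier_can_ship_order(customer_id, order_total)
        print("order accepted" if ok else "order rejected")

    client_submit_order("C-1001", 300.0)   # -> order accepted
    client_submit_order("C-1002", 300.0)   # -> order rejected

The client holds only presentation logic; the business rule sits in the middle tier, and only the data tier touches the stored records, which is what keeps the client "thin".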


Usage Considerations
Mainframes are preferred for big batch jobs and storing massive amounts of vital data. They are mainly used in the banking industry, public utility systems, and for information services. Mainframes also have tools for monitoring performance of the entire system, including networks and applications, that are not available today on UNIX servers [Siwolp 95].
New mainframes provide parallel systems (unlike older bipolar machines) and use complementary metal-oxide semiconductor (CMOS) microprocessors rather than emitter-coupled logic (ECL) processors. Because CMOS processors are packed more densely than ECL microprocessors, mainframes can be built much smaller and are not so power-hungry. They can also be cooled with air instead of water [Siwolp 95].

While it appeared in the early 1990s that mainframes were being replaced by client/server architectures, they are making a comeback. Some mainframe vendors have seen as much as a 66% jump in mainframe shipments in 1995 due to the new mainframe server software architecture [Siwolp 95].

Given the cost of a mainframe compared to other servers, UNIX workstations and personal computers (PCs), it is not likely that mainframes would replace all other servers in a distributed two or three tier client/server architecture.


Maturity
Mainframe technology has been well known for decades. The new improved models have been fielded since 1994. The new mainframe server software architecture provides the distributed client/server design with massive storage and improved security capability. The new technologies of data warehousing and data mining allow extraction of information from the operational mainframe server's massive storage to provide businesses with timely data to improve overall business effectiveness. For example, stores such as Wal-Mart found that by placing certain products in close proximity within the store, both products sold at higher rates than when not collocated.

Costs and Limitations
By themselves, mainframes are not appropriate mechanisms to support graphical user interfaces. Nor can they easily accommodate increases in the number of user applications or rapidly changing user needs [Edelstein 94].

Alternatives
Using a client/server architecture without a mainframe server is a possible alternative. When requirements for high volume (greater than 50 gigabytes), batch-type processing, security, and mass storage are minimal, three tier or two tier architectures without a mainframe server may be viable alternatives. Other possible alternatives to using mainframes in a client/server distributed environment are using a parallel processing software architecture or using a database machine.

Complementary Technologies
A complementary technology to mainframe server software architectures is open systems. This is because movement in the industry towards interoperable heterogeneous software programs and operating systems will continue to increase reuse of mainframe technology and provide potentially new applications for mainframe capabilities.

Sunday, October 14, 2007

How To: Burn a CD using Windows XP

You can burn a CD using Windows XP; no special CD-burning software is required. All you need is a CD-R or CD-RW disk, a machine running Windows XP, and a CD-RW drive.

Step 1
Insert a blank CD-R or CD-RW disk into the CD-RW drive. A pop-up dialog box should appear after Windows loads the CD. (No pop-up dialog box? Open "My Computer" from your desktop and double-click on your CD-RW drive icon.)

Step 2
Double-click the option, "Open writable CD folder using Windows Explorer." You will see the files that are currently on the CD in your CD-RW drive. If you inserted a blank CD, you will see nothing.

Step 3
Click on the "Start" menu, and then "My Computer." (No "My Computer" on your start menu? It is likely you have the Windows Classic Start Menu enabled, and you will have to double-click "My Computer" on the desktop instead.) Navigate to the files that you wish to burn onto the CD.

Step 4
Single-click on the first file you wish to burn. Hold down the "Control" key and continue to single-click on other desired files until you have selected them all. Let go of the "Control" key. All your files should remain selected and appear blue. Right-click on any file and choose "Copy."

Step 5
Go back to the open window that displays the contents of your CD drive. Right-click in the white space and choose "Paste." The pasted icons will appear washed out, and they will have little black arrows on them indicating your next step.

Step 6
Choose "Write these files to CD" on the left-hand menu bar under "CD Writing Tasks." A wizard will start. First, name your CD. You can use up to 16 characters. After typing a name, click "Next." This will start the burning process. When the CD is finished burning, the CD will eject itself.

Step 7
Follow the remaining wizard prompts. It will ask if you want to burn the same files to another CD. If so, click "Yes, write these files to another CD." If not, click "Finish." You're done.
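
If you prefer a script, Steps 4 and 5 can also be done programmatically: Windows XP stages files for burning in a per-user "CD Burning" folder before the wizard writes them to disc. Here is a minimal Python sketch; it assumes XP's default profile layout and uses hypothetical file names, so adjust both for your machine:

    import os
    import shutil

    # Windows XP's staging area for files waiting to be burned (default profile layout).
    staging = os.path.join(os.environ["USERPROFILE"],
                           "Local Settings", "Application Data",
                           "Microsoft", "CD Burning")

    # Hypothetical files to burn; replace with your own paths.
    files_to_burn = [r"C:\My Documents\photo1.jpg", r"C:\My Documents\notes.txt"]

    for path in files_to_burn:
        shutil.copy(path, staging)    # same effect as the Copy / Paste in Steps 4 and 5

Afterwards, open the CD drive window and choose "Write these files to CD" as in Step 6; the wizard takes over from there.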

Tips & Warnings
Make sure to test your newly burned CD—try to open a few files to ensure that the process was done correctly.