Bharat Banate's Work Profile


Monday, December 10, 2007

Enterprise JavaBeans (EJB): Introduction

Enterprise JavaBeans (EJB) technology is the server-side component architecture for Java Platform, Enterprise Edition (Java EE). EJB technology enables rapid and simplified development of distributed, transactional, secure and portable applications based on Java technology.



The EJB specification intends to provide a standard way to implement the back-end 'business' code typically found in enterprise applications (as opposed to 'front-end' user-interface code). Such code was frequently found to reproduce the same types of problems, and it was found that solutions to these problems are often repeatedly re-implemented by programmers. Enterprise Java Beans were intended to handle such common concerns as persistence, transactional integrity, and security in a standard way, leaving programmers free to concentrate on the particular problem at hand.

EJB types
Stateful Session Beans are distributed objects having state: that is, they keep track of which calling program they are dealing with throughout a session. For example, checking out in a web store might be handled by a stateful session bean, which would use its state to keep track of where the customer is in the checkout process. A stateful session bean's state may be persisted, but access to the bean instance is limited to only one client.

Stateless Session Beans are distributed objects that do not have state associated with them, thus allowing concurrent access to the bean. The contents of instance variables are not guaranteed to be preserved across method calls. For example, sending an e-mail to customer support might be handled by a stateless bean, since this is a one-off operation and not part of a multi-step process. The lack of overhead to maintain a conversation with the calling program makes stateless beans less resource-intensive than stateful beans.
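
To make the two session bean styles concrete, here is a minimal sketch using the EJB 3.0 annotations (javax.ejb.Stateful and javax.ejb.Stateless). The interface and class names (Checkout, CheckoutBean, SupportMailerBean) are invented for this example, and each type would normally live in its own source file.

import javax.ejb.Remove;
import javax.ejb.Stateful;
import javax.ejb.Stateless;

// Hypothetical business interface for the web-store checkout example above.
public interface Checkout {
    void addItem(String itemId);
    void confirmOrder();
}

// Stateful: one bean instance is tied to one client, and the items list
// survives between method calls for the whole checkout conversation.
@Stateful
public class CheckoutBean implements Checkout {
    private final java.util.List<String> items = new java.util.ArrayList<String>();

    public void addItem(String itemId) {
        items.add(itemId);
    }

    @Remove // ends the conversation; the container discards this instance
    public void confirmOrder() {
        // place the order using the accumulated items
    }
}

// Stateless: instances are pooled and shared among clients, so no
// conversational state may be kept in instance variables between calls.
@Stateless
public class SupportMailerBean {
    public void sendSupportMail(String customerId, String text) {
        // one-off operation; nothing needs to be remembered afterwards
    }
}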

Message Driven Beans were introduced in the EJB 2.0 specification, which is supported by Java 2 Platform, Enterprise Edition 1.3 or higher. The message-driven bean represents the integration of JMS (Java Message Service) with EJB to create an entirely new type of bean designed to handle asynchronous JMS messages. Message Driven Beans are distributed objects that behave asynchronously; that is, they handle operations that do not require an immediate response. For example, a user of a website clicking on a "keep me informed of future updates" box may trigger a call to a Message Driven Bean to add the user to a list in the company's database. (This call is asynchronous because the user does not need to wait to be informed of its success or failure.) These beans subscribe to JMS message queues or message topics, allowing event-driven processing inside the EJB container. Unlike other types of beans, an MDB has no client view (no Remote/Home interfaces), i.e. clients cannot look up an MDB instance; it simply listens for incoming messages on a JMS queue (or topic) and processes them automatically.
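
A minimal message-driven bean sketch along the lines of the example above might look as follows; the class name and message contents are hypothetical, and a real deployment would also configure the actual destination the bean listens on.

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Listens on a JMS queue; the container calls onMessage() for every incoming
// message, and there is no Remote/Home client view to look up.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue")
})
public class UpdateSubscriptionBean implements MessageListener {

    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String userId = ((TextMessage) message).getText();
                // add the user to the "keep me informed" list in the database
            }
        } catch (JMSException e) {
            // in a real bean: log the error and let the container handle redelivery
        }
    }
}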

Previous versions of EJB also defined a type of bean known as an Entity Bean: a distributed object having persistent state. Beans whose container managed their persistent state were said to use Container-Managed Persistence (CMP), whereas beans that managed their own state were said to use Bean-Managed Persistence (BMP). Entity Beans were replaced by the Java Persistence API in EJB 3.0, though as of 2007, CMP 2.x style Entity Beans are still available for backward compatibility.

Other types of Enterprise Beans have been proposed. For instance, Enterprise Media Beans (JSR 86) address the integration of multimedia objects in Java EE applications.

Monday, December 3, 2007

The Year 2038 Bug

It's barely 8 years since we had the millennium bug, so don't say you didn't get enough warning! A lot of systems in the world may have date rollover troubles in a fraction over 30 years' time. The millennium bug (more accurately known as the Two Digit Century Rollover Bug) was caused by using 2 digits instead of 4 for the year. So Christmas 2007 falls on 12/25/07. Of course, when 1999 rolled over to 2000, the first day of the new century became 01/01/00, and this could have had serious consequences had all the old systems not been sorted out in advance. This problem will also happen again in 2099, 2199 etc. if anyone is silly enough to keep using two-digit year dates.

But the Unix bug will occur in 2038. That's because the Unix date system counts from 1970 and uses a time_t (a signed 32-bit int) to hold the number of seconds. The highest value is 2^31 - 1 = 2147483647 seconds, which is about 24855 days. Add that to Jan 1 1970 and you get Jan 19 2038! So sometime early on the morning of that date, any software using a signed 32-bit int for a date will overflow and wrap around to a negative value, which most systems will interpret as a date back in December 1901. So how are you going to cope with this problem, dudes?!
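
The rollover is easy to demonstrate with a 32-bit signed counter; the little Java sketch below (plain Java, no libraries) prints the last representable time_t second and what one second later looks like after the overflow.

import java.util.Date;

// Demonstrates the 32-bit time_t limit: 2^31 - 1 seconds after
// 1 Jan 1970 UTC is 19 Jan 2038, and adding one more second wraps
// the counter around to a large negative value (back in 1901).
public class Year2038Demo {
    public static void main(String[] args) {
        int lastSecond = Integer.MAX_VALUE;   // 2147483647 = 2^31 - 1
        int wrapped = lastSecond + 1;         // silently overflows to -2147483648

        System.out.println("Last valid time_t: " + new Date(1000L * lastSecond));
        System.out.println("One second later:  " + new Date(1000L * wrapped));
    }
}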

Sunday, November 4, 2007

Storage: 1TB Hard Disk Drive


Recently, both major companies Hitachi and Seagate have launched 1 TB hard disk drives, which is a major milestone in the storage world.
In the Indian market, Hitachi has currently launched its 1 TB drives.
“Growing volumes of songs, movies, personal videos, pictures and games stored on our PCs highlight a ready market for higher capacity HDDs in India.”

“At 133 gigabits per square inch, the Hitachi 1TB hard drive crams much more data per square inch than any other HDD available in the market today. Quieter acoustics, lower heat dissipation levels and much faster read/write speeds make this family of Hitachi HDDs a must-have for all PC users,” he added.




The 3.5-inch drive belongs to the 7200 RPM family of Hitachi 1TB SATA hard disk drives. These storage units can be used for business, commercial, gaming and media centre PCs, and also in external storage devices. The new drive is equipped with reliable perpendicular magnetic recording technology, a robust 3 Gb/s SATA interface and enhanced Rotational Vibration Safeguard (RVS) technology.

Besides, the 1TB drive ensures fast data transfer rates, low power consumption and advanced shock protection. The Hitachi 1TB SATA hard disk comes with a 5-year warranty.

The RVS system is designed to sustain performance in densely packed multi-drive systems.

The Seagate Barracuda 7200.11 hard drive consumes 13 W of power, compared with about 13.6 W for Hitachi’s 1 TB drive. In addition, Seagate's new 1 TB hard drive has just four platters, which results in cooler operating temperatures and lower power consumption, helping the disk last longer with less wear and tear.


Seagate claims that the Barracuda 7200.11 1TB hard drive unit is a newly designed product optimised for demanding business-critical and nearline enterprise storage environments including: networked and tiered storage solutions, reference/compliance storage, disc-to-disc backup and restore, archiving solutions, rich media content storage and collaboration.

The company also claims that the new Barracuda 7200.11 hard drive boosts reliability, with an unrecoverable error rate that is 10 times better than desktop-class drives and a 1.2 million hour Mean Time Between Failures under full 24x7 operation.


Read More:
Seagate 1 TB HDD
and Hitachi 1 TB HDD

Also interesting: Hitachi

Thursday, November 1, 2007

Software Testing: Key Concepts

Taxonomy
There is a plethora of testing methods and testing techniques, serving multiple purposes in different life cycle phases. Classified by purpose, software testing can be divided into correctness testing, performance testing, reliability testing and security testing. Classified by life-cycle phase, software testing falls into the following categories: requirements phase testing, design phase testing, program phase testing, evaluating test results, installation phase testing, acceptance testing and maintenance testing. By scope, software testing can be categorized as unit testing, component testing, integration testing, and system testing.
Correctness testing
Correctness is the minimum requirement of software, the essential purpose of testing. Correctness testing needs some type of oracle to tell the right behavior from the wrong one. The tester may or may not know the inside details of the software module under test, e.g. control flow, data flow, etc. Therefore, either a white-box point of view or a black-box point of view can be taken in testing software. We must note that the black-box and white-box ideas are not limited to correctness testing only.

Black-box testing
The black-box approach is a testing method in which test data are derived from the specified functional requirements without regard to the final program structure. [Perry90] It is also termed data-driven, input/output driven [Myers79], or requirements-based [Hetzel88] testing. Because only the functionality of the software module is of concern, black-box testing also mainly refers to functional testing -- a testing method that emphasizes executing the functions and examining their input and output data. [Howden87] The tester treats the software under test as a black box -- only the inputs, outputs and specification are visible, and the functionality is determined by observing the outputs to corresponding inputs. In testing, various inputs are exercised and the outputs are compared against the specification to validate the correctness. All test cases are derived from the specification. No implementation details of the code are considered.

It is obvious that the more we have covered in the input space, the more problems we will find and therefore the more confident we will be about the quality of the software. Ideally we would be tempted to exhaustively test the input space. But as stated above, exhaustively testing the combinations of valid inputs will be impossible for most programs, let alone considering invalid inputs, timing, sequence, and resource variables. Combinatorial explosion is the major roadblock in functional testing. To make things worse, we can never be sure that the specification is correct or complete. Due to limitations of the language used in the specifications (usually natural language), ambiguity is often inevitable. Even if we use some type of formal or restricted language, we may still fail to write down all the possible cases in the specification. Sometimes, the specification itself becomes an intractable problem: it is not possible to specify precisely every situation that can be encountered using limited words. And people can seldom specify clearly what they want -- they usually can tell whether a prototype is, or is not, what they want after it has been finished. Specification problems contribute approximately 30 percent of all bugs in software. [Beizer95]

The research in black-box testing mainly focuses on how to maximize the effectiveness of testing with minimum cost, usually measured by the number of test cases. It is not possible to exhaust the input space, but it is possible to exhaustively test a subset of the input space. Partitioning is one of the common techniques. If we have partitioned the input space and assume all the input values in a partition are equivalent, then we only need to test one representative value in each partition to sufficiently cover the whole input space. Domain testing [Beizer95] partitions the input domain into regions, and considers the input values in each domain an equivalence class. Domains can be exhaustively tested and covered by selecting representative value(s) in each domain. Boundary values are of special interest. Experience shows that test cases that explore boundary conditions have a higher payoff than test cases that do not. Boundary value analysis [Myers79] requires one or more boundary values to be selected as representative test cases. The difficulty with domain testing is that incorrect domain definitions in the specification cannot be efficiently discovered.
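
As a concrete illustration of partitioning and boundary value analysis, suppose a hypothetical specification accepts ages from 18 to 65 inclusive. The input domain then splits into three equivalence classes (below, inside, and above the range), and the boundary values 17, 18, 65 and 66 are the test cases most likely to expose faults. A small self-checking Java sketch:

// Hypothetical function under test: the spec accepts ages 18..65 inclusive.
public class AgeValidatorTest {

    static boolean isEligible(int age) {
        return age >= 18 && age <= 65;
    }

    public static void main(String[] args) {
        // One representative per equivalence class ...
        check(isEligible(10) == false, "below the valid range");
        check(isEligible(40) == true,  "inside the valid range");
        check(isEligible(90) == false, "above the valid range");

        // ... plus the boundary values, where faults are most likely.
        check(isEligible(17) == false, "just below lower boundary");
        check(isEligible(18) == true,  "lower boundary");
        check(isEligible(65) == true,  "upper boundary");
        check(isEligible(66) == false, "just above upper boundary");
        System.out.println("All partition/boundary checks passed.");
    }

    static void check(boolean condition, String caseName) {
        if (!condition) throw new AssertionError("Failed: " + caseName);
    }
}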

Good partitioning requires knowledge of the software structure. A good testing plan will not only contain black-box testing, but also white-box approaches, and combinations of the two.

White-box testing
Contrary to black-box testing, in white-box testing the software is viewed as a white box, or glass box, because the structure and flow of the software under test are visible to the tester. Testing plans are made according to the details of the software implementation, such as programming language, logic, and style. Test cases are derived from the program structure. White-box testing is also called glass-box testing, logic-driven testing [Myers79] or design-based testing [Hetzel88].

There are many techniques available in white-box testing, because the problem of intractability is eased by specific knowledge of and attention to the structure of the software under test. The intention of exhausting some aspect of the software is still strong in white-box testing, and some degree of exhaustion can be achieved, such as executing each line of code at least once (statement coverage), traversing every branch (branch coverage), or covering all the possible combinations of true and false condition predicates (multiple condition coverage). [Parrington89]
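
The difference between statement coverage and branch coverage can be seen on a tiny, hypothetical method with two independent conditions: one input can execute every statement, but covering every branch outcome needs at least one more.

// Hypothetical method under test, used to contrast statement and branch coverage.
public class CoverageExample {

    static int classify(int x) {
        int result = 0;
        if (x > 0) {          // branch A: true / false
            result = 1;
        }
        if (x % 2 == 0) {     // branch B: true / false
            result += 10;
        }
        return result;
    }

    public static void main(String[] args) {
        // classify(2) alone executes every statement (statement coverage),
        // because both if-conditions are true for x = 2 ...
        System.out.println(classify(2));   // 11

        // ... but branch coverage also needs the false outcomes,
        // so at least one more input such as classify(-1) is required.
        System.out.println(classify(-1));  // 0
    }
}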

Control-flow testing, loop testing, and data-flow testing all map the corresponding flow structure of the software onto a directed graph. Test cases are carefully selected based on the criterion that all the nodes or paths are covered or traversed at least once. By doing so we may discover unnecessary "dead" code -- code that is of no use, or never gets executed at all, which cannot be discovered by functional testing.

In mutation testing, the original program code is perturbed and many mutated programs are created, each containing one fault. Each faulty version of the program is called a mutant. Test data are selected based on their effectiveness in failing the mutants. The more mutants a test case can kill, the better the test case is considered. The problem with mutation testing is that it is too computationally expensive to use. The boundary between the black-box approach and the white-box approach is not clear-cut. Many of the testing strategies mentioned above may not be safely classified as either black-box testing or white-box testing. The same is true for transaction-flow testing, syntax testing, finite-state testing, and many other testing strategies not discussed in this text. One reason is that all the above techniques need some knowledge of the specification of the software under test. Another reason is that the idea of specification itself is broad -- it may contain any requirement, including the structure, programming language, and programming style, as part of the specification content.

We may be reluctant to consider random testing as a testing technique. The test case selection is simple and straightforward: test cases are randomly chosen. A study in [Duran84] indicates that random testing is more cost-effective for many programs. Some very subtle errors can be discovered at low cost. And it is also not inferior in coverage to other, carefully designed testing techniques. One can also obtain a reliability estimate using random testing results based on operational profiles. Effectively combining random testing with other testing techniques may yield more powerful and cost-effective testing strategies.
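
A random testing loop is only a few lines when a trusted oracle is available to check each result. The sketch below is purely illustrative: the "implementation under test" and the oracle are both stand-ins, and a fixed seed is used so that any failure found can be reproduced.

import java.util.Random;

// Random testing sketch: feed randomly chosen inputs to the implementation
// under test and compare each result against a trusted oracle.
public class RandomTestingDemo {

    // Hypothetical implementation under test (deliberately simple here).
    static int absUnderTest(int x) {
        return x < 0 ? -x : x;
    }

    public static void main(String[] args) {
        Random random = new Random(42);   // fixed seed so failures are reproducible
        for (int i = 0; i < 100000; i++) {
            int x = random.nextInt();
            int expected = Math.abs(x);   // oracle
            if (absUnderTest(x) != expected) {
                System.out.println("Failure found for input " + x);
                return;
            }
        }
        System.out.println("No failures observed in 100000 random cases.");
    }
}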

Performance testing
Not all software systems have explicit specifications on performance, but every system has implicit performance requirements: the software should not take infinite time or infinite resources to execute. The term "performance bugs" is sometimes used to refer to design problems in software that cause the system performance to degrade.

Performance has always been a great concern and a driving force of computer evolution. Performance evaluation of a software system usually includes: resource usage, throughput, stimulus-response time and queue lengths detailing the average or maximum number of tasks waiting to be serviced by selected resources. Typical resources that need to be considered include network bandwidth requirements, CPU cycles, disk space, disk access operations, and memory usage [Smith90]. The goal of performance testing can be performance bottleneck identification, performance comparison and evaluation, etc. The typical method of doing performance testing is using a benchmark -- a program, workload or trace designed to be representative of the typical system usage. [Vokolos98]
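
A benchmark, at its simplest, is a loop that drives a representative workload and reports throughput and average response time. The sketch below is a bare-bones illustration only; the workload() method is a placeholder for whatever operation the system under test actually performs, and a serious benchmark would also warm up the JVM and repeat the measurement.

// Minimal benchmark sketch: run a representative workload repeatedly and
// report throughput and average response time.
public class SimpleBenchmark {

    static void workload() {
        // placeholder for the operation being measured
        Math.sqrt(System.nanoTime());
    }

    public static void main(String[] args) {
        int iterations = 1000000;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            workload();
        }
        long elapsedNanos = System.nanoTime() - start;

        double seconds = elapsedNanos / 1e9;
        System.out.printf("Throughput: %.0f ops/sec%n", iterations / seconds);
        System.out.printf("Average response time: %.1f ns/op%n",
                          (double) elapsedNanos / iterations);
    }
}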

Reliability testing
Software reliability refers to the probability of failure-free operation of a system. It is related to many aspects of software, including the testing process. Directly estimating software reliability by quantifying its related factors can be difficult. Testing is an effective sampling method to measure software reliability. Guided by the operational profile, software testing (usually black-box testing) can be used to obtain failure data, and an estimation model can be further used to analyze the data to estimate the present reliability and predict future reliability. Therefore, based on the estimation, the developers can decide whether to release the software, and the users can decide whether to adopt and use the software. Risk of using software can also be assessed based on reliability information. [Hamlet94] advocates that the primary goal of testing should be to measure the dependability of tested software.
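
As a rough illustration of the idea, a Nelson-style estimate simply treats reliability as the fraction of operational-profile test runs that did not fail. The numbers below are made up for the example.

// Very rough reliability estimate: run test cases drawn from the operational
// profile and estimate reliability as the fraction of runs that did not fail.
// The failure data here are made-up placeholders.
public class ReliabilityEstimate {
    public static void main(String[] args) {
        int runs = 10000;    // test runs sampled from the operational profile
        int failures = 3;    // observed failures among those runs

        double estimatedReliability = 1.0 - (double) failures / runs;
        System.out.printf("Estimated reliability per run: %.4f%n", estimatedReliability);
    }
}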

There is agreement on the intuitive meaning of dependable software: it does not fail in unexpected or catastrophic ways. [Hamlet94] Robustness testing and stress testing are variants of reliability testing based on this simple criterion.

The robustness of a software component is the degree to which it can function correctly in the presence of exceptional inputs or stressful environmental conditions. [IEEE90] Robustness testing differs from correctness testing in the sense that the functional correctness of the software is not of concern; it only watches for robustness problems such as machine crashes, process hangs or abnormal termination. The oracle is relatively simple, therefore robustness testing can be made more portable and scalable than correctness testing. This research has drawn more and more interest recently, most of it using commercial operating systems as the target, such as the work in [Koopman97] [Kropp98] [Ghosh98] [Devale99] [Koopman99].

Stress testing, or load testing, is often used to test the whole system rather than the software alone. In such tests the software or system is exercised at or beyond the specified limits. Typical stresses include resource exhaustion, bursts of activity, and sustained high loads.

Security testing
Software quality, reliability and security are tightly coupled. Flaws in software can be exploited by intruders to open security holes. With the development of the Internet, software security problems are becoming even more severe.

Many critical software applications and services have integrated security measures against malicious attacks. The purpose of security testing of these systems includes identifying and removing software flaws that may potentially lead to security violations, and validating the effectiveness of security measures. Simulated security attacks can be performed to find vulnerabilities.

Wednesday, October 31, 2007

IT News: Democratising IT

A New Model For PC Penetration
INDIA has emerged as a global leader in the advance of information technology. Yet the country faces a fundamental challenge — building on its successes by enabling greater access to technology for its people. This will drive expanded economic growth and opportunity. Less than 3% of Indians own a personal computer — compared to nearly 8% of Chinese, almost 14% of Brazilians and more than 15% of Russians.

Despite the very low penetration of computers in India, the impact has been profound. India is home to three of the world’s 10 biggest IT firms — Tata, Infosys, and Wipro, and already generates nearly $40 billion in revenues from its IT software and services sector. Nasscom forecasts this figure to grow by nearly 27% next year. It must be recognised that the benefits of broader IT use and deeper Internet access are substantial, and will be a catalyst for — not a result of — economic growth and modernisation. India is already benefiting from e-governance initiatives that deliver real-time tallying of results of the world’s largest elections and from technology-driven distance learning that brings the world’s educational resources to students without regard to location or economic background.

But cost has been a major roadblock for broader technology adoption in India. Reducing taxes and tariffs is essential to facilitating broader access to technology and driving growth in the technology sectors. Global hardware exports are 43% of Chinese exports versus only 2.3% for India. India is clearly missing out on a big opportunity. If it doesn’t act soon, investments will go further into China and emerging countries such as Vietnam, instead of India.
Consider also that, in India, a typical desktop computer costs 44% of the average Indian’s annual wage. Brazil’s experience in supporting technology adoption is particularly instructive. Since reducing taxes on computer purchases two years ago, the PC market tripled, and more than two million families bought their first PC, making Brazil the world’s fourth-largest PC market. What was more important was the multiplier effect this had on the economy. Thousands of IT industry jobs were created and government revenue from the IT sector increased by 50%. But cost isn’t the only barrier. IT complexity will also threaten access to technology while increasing its cost and environmental impact. We are all members of what we at Dell call the ReGeneration — a new global movement concerned with the regeneration of not just our businesses but also our planet. Environmental protection efforts are improving, as reflected in the Nobel Prize jointly awarded to former US vice-president Al Gore and the Intergovernmental Panel on Climate Change headed by Rajendra Pachauri. And technology is an important part of these efforts. The future will bring even more benefits.
By 2020 microprocessors will run one thousand times as many computations per second as they do today. That will mean enormous gains in productivity and efficiency, giving people unimaginable power to access, organise, and transform information. Indian citizens will more fully benefit from this progress as government and industry leaders strengthen their cooperation. This will help create the conditions in which IT can flourish and reach all people, businesses, and institutions across the country. India plays a pivotal role in global IT. Technology users in the western world benefit every day from the work of bright, talented Indian employees and their constant innovation. But more than serving as the world’s software writer or back office, India is harnessing the productivity, efficiency, and innovation benefits of IT as a foundation for global economic competitiveness. I see industry working, with great commitment, with India’s government to build on this progress, and to help further democratize access to technology, so that more Indian citizens enjoy even more of technology’s benefits with an ever-decreasing impact on our environment. That is our shared responsibility. By harnessing these forces — the democratization and simplification of technology, we can make a positive impact not just on our economies, but also our planet.

(Michael Dell)

Sunday, October 28, 2007

Software Testing: Introduction

Introduction
Software Testing is the process of executing a program or system with the intent of finding errors. [Myers79] Or, it involves any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. [Hetzel88] Software is not unlike other physical processes where inputs are received and outputs are produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software can fail in many bizarre ways. Detecting all of the different failure modes for software is generally infeasible. [Rstcorp]

Unlike most physical systems, most of the defects in software are design errors, not manufacturing defects. Software does not suffer from corrosion, wear-and-tear -- generally it will not change until upgrades, or until obsolescence. So once the software is shipped, the design defects -- or bugs -- will be buried in and remain latent until activation.

Software bugs will almost always exist in any software module with moderate size: not because programmers are careless or irresponsible, but because the complexity of software is generally intractable -- and humans have only limited ability to manage complexity. It is also true that for any complex systems, design defects can never be completely ruled out.

Discovering the design defects in software is equally difficult, for the same reason of complexity. Because software and any digital systems are not continuous, testing boundary values is not sufficient to guarantee correctness. All the possible values would need to be tested and verified, but complete testing is infeasible. Exhaustively testing a simple program to add only two integer inputs of 32 bits (yielding 2^64 distinct test cases) would take hundreds of millions of years, even if tests were performed at a rate of thousands per second. Obviously, for a realistic software module, the complexity can be far beyond the example mentioned here. If inputs from the real world are involved, the problem gets worse, because timing, unpredictable environmental effects and human interactions are all possible input parameters under consideration.

A further complication has to do with the dynamic nature of programs. If a failure occurs during preliminary testing and the code is changed, the software may now work for a test case that it didn't work for previously. But its behavior on pre-error test cases that it passed before can no longer be guaranteed. To account for this possibility, testing should be restarted. The expense of doing this is often prohibitive. [Rstcorp]

An interesting analogy parallels the difficulty in software testing with that of pesticides, known as the Pesticide Paradox [Beizer90]: every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual. But this alone will not guarantee to make the software better, because the Complexity Barrier [Beizer90] principle states: software complexity (and therefore that of bugs) grows to the limits of our ability to manage that complexity. By eliminating the (previous) easy bugs you allowed another escalation of features and complexity, but this time you have subtler bugs to face, just to retain the reliability you had before. Society seems to be unwilling to limit complexity because we all want that extra bell, whistle, and feature interaction. Thus, our users always push us to the complexity barrier, and how close we can approach that barrier is largely determined by the strength of the techniques we can wield against ever more complex and subtle bugs. [Beizer90]

Regardless of the limitations, testing is an integral part of software development. It is broadly deployed in every phase of the software development cycle. Typically, more than 50 percent of the development time is spent in testing. Testing is usually performed for the following purposes:
To improve quality.
As computers and software are used in critical applications, the outcome of a bug can be severe. Bugs can cause huge losses. Bugs in critical systems have caused airplane crashes, allowed space shuttle missions to go awry, halted trading on the stock market, and worse. Bugs can kill. Bugs can cause disasters. The so-called year 2000 (Y2K) bug has given birth to a cottage industry of consultants and programming tools dedicated to making sure the modern world doesn't come to a screeching halt on the first day of the next century. [Bugs] In a computerized embedded world, the quality and reliability of software is a matter of life and death.

Quality means conformance to the specified design requirements. Being correct, the minimum requirement of quality, means performing as required under specified circumstances. Debugging, a narrow view of software testing, is performed heavily by programmers to find design defects. The imperfection of human nature makes it almost impossible to make a moderately complex program correct on the first try. Finding the problems and getting them fixed [Kaner93] is the purpose of debugging in the programming phase.

For Verification & Validation (V&V)
As the topic Verification and Validation indicates, another important purpose of testing is verification and validation (V&V). Testing can serve as a metric and is heavily used as a tool in the V&V process. Testers can make claims based on interpretations of the testing results: either the product works under certain situations, or it does not work. We can also compare the quality among different products under the same specification, based on results from the same test.

We cannot test quality directly, but we can test related factors to make quality visible. Quality has three sets of factors -- functionality, engineering, and adaptability. These three sets of factors can be thought of as dimensions in the software quality space. Each dimension may be broken down into its component factors and considerations at successively lower levels of detail.
Good testing provides measures for all relevant factors. The importance of any particular factor varies from application to application. Any system where human lives are at stake must place extreme emphasis on reliability and integrity. In the typical business system usability and maintainability are the key factors, while for a one-time scientific program neither may be significant. Our testing, to be fully effective, must be geared to measuring each relevant factor and thus forcing quality to become tangible and visible. [Hetzel88]

Tests with the purpose of validating that the product works are named clean tests, or positive tests. The drawback is that they can only validate that the software works for the specified test cases; a finite number of tests cannot validate that the software works for all situations. On the contrary, only one failed test is sufficient to show that the software does not work. Dirty tests, or negative tests, refer to tests aiming at breaking the software, or showing that it does not work. A piece of software must have sufficient exception handling capabilities to survive a significant level of dirty tests.

A testable design is a design that can be easily validated, falsified and maintained. Because testing is a rigorous effort and requires significant time and cost, design for testability is also an important design rule for software development.

For reliability estimation
Software reliability has important relations with many aspects of software, including the structure, and the amount of testing it has been subjected to. Based on an operational profile (an estimate of the relative frequency of use of various inputs to the program [Lyu95]), testing can serve as a statistical sampling method to gain failure data for reliability estimation.

Software testing is not mature. It still remains an art, because we still cannot make it a science. We are still using the same testing techniques invented 20-30 years ago, some of which are crafted methods or heuristics rather than good engineering methods. Software testing can be costly, but not testing software is even more expensive, especially in places where human lives are at stake. Solving the software-testing problem is no easier than solving the Turing halting problem. We can never be sure that a piece of software is correct. We can never be sure that the specifications are correct. No verification system can verify every correct program. We can never be certain that a verification system is correct either.

Saturday, October 20, 2007

IT News: Cognizant pips Infy to acquire marketRx for $135 m

COGNIZANT Technology Solutions, an IT and BPO services company, said it will pay $135 million in cash to acquire New Jersey-based marketRx, a provider of analytics and related software services to life sciences companies. It’s the biggest acquisition made by Cognizant so far, and among the biggest in high-end business process outsourcing space recently.
In a deal structured on Thursday night, Cognizant is believed to have pipped IT services giant Infosys Technologies in bagging the deal. Earlier, ET reported that a host of suitors, including biggies Infosys and Wipro, had evinced interest in acquiring marketRx, one of the largest, independent offshore KPO businesses. Two investment banks, Mumbai-based Avendus and US-based William Blair, advised marketRx in sealing the deal.
This is the second high-profile deal in the KPO space, after WNS snapped up Bangalore-headquartered Marketics for $65 million earlier this year. Sources said that while Infosys’ valuation of marketRx was higher, Cognizant offered an upfront cash payout. It is believed that Infy’s offer had a substantial earn-out chunk, which essentially means a combination of upfront money with the remainder based on future performance.
The Nasdaq-listed Cognizant, with predominant operations in India, said the acquisition would help strengthen its analytics unit and offer more services to the life sciences industry. The deal is expected to close in the fourth quarter of 2007 and would be funded from its cash reserves. marketRx, with per-employee revenue of about $100,000, is projected to report revenues of over $40 million in 2007. It has 430 people, with 260 in Gurgaon, 160 in the US (four locations), and 10 in London.
Cognizant is expected to cross 55,000 people by the end of this year, with revenue guidance of $2.11 billion in 2007. Its manpower addition this year is expected to be over 16,000, 40 times the strength of marketRx. Cognizant president R Chandrasekaran said: “It is a ‘tuck-under’ acquisition that is consistent with our acquisition strategy of selectively acquiring businesses that complement or enhance our business model and value to our customers.”
Cognizant president and CEO Francisco D’Souza said: “This acquisition expands our capabilities in the analytics segment and broadens our service offerings for the life sciences industry while providing strong synergies with our existing business intelligence/data warehousing and CRM (customer relationship management) services.”
marketRx has a proven global delivery model for analytics, deep domain knowledge and proprietary analytics software platform, he said. “We expect to leverage these assets to establish a pre-eminent position in the fast-growing analytics market both in life sciences and other industries,” Mr D’Souza said in a statement.
marketRx president & CEO Jaswinder (Jassi) Chadha said: “The combination of our market leading position in the life sciences analytics segment and Cognizant’s strengths as a top global services player will allow us to expand our relationships with our life sciences clients by providing them with a broader range of outsourced services, and conversely enables us to extend our capabilities to other vertical markets.”

Friday, October 19, 2007

Mainframe: Mainframe Server Software Architectures


Purpose and Origin
Since 1994, mainframes have been combined with distributed architectures to provide massive storage and to improve system security, flexibility, scalability, and reusability in the client/server design. In a mainframe server software architecture, mainframes are integrated as servers and data warehouses in a client/server environment. Additionally, mainframes still excel at simple transaction-oriented data processing to automate repetitive business tasks such as accounts receivable, accounts payable, general ledger, credit account management, and payroll. Siwolp and Edelstein provide details on mainframe server software architectures [Siwolp 95, Edelstein 94].

Technical Detail
While client/server systems are suited for rapid application deployment and distributed processing, mainframes are efficient at online transactional processing, mass storage, centralized software distribution, and data warehousing [Data 96]. Data warehousing is information (usually in summary form) extracted from an operational database by data mining (drilling down into the information through a series of related queries). The purpose of data warehousing and data mining is to provide executive decision makers with data analysis information (such as trends and correlated results) to make and improve business decisions.
Using a mainframe as a server in a three tier client/server architecture combines mainframe horsepower with a distributed design, resulting in a very effective and efficient system. Mainframe vendors are now providing standard communications and programming interfaces that make it easy to integrate mainframes as servers in a client/server architecture. Using mainframes as servers in a client/server distributed architecture provides a more modular system design, and provides the benefits of the client/server technology.

Using mainframes as servers in a client/server architecture also enables the distribution of workload between major data centers and provides disaster protection and recovery by backing up large volumes of data at disparate locations. The current model favors "thin" clients (contains primarily user interface services) with very powerful servers that do most of the extensive application and data processing, such as in a two tier architecture. In a three tier client/server architecture, process management (business rule execution) could be off-loaded to another server.


Usage Considerations
Mainframes are preferred for big batch jobs and storing massive amounts of vital data. They are mainly used in the banking industry, public utility systems, and for information services. Mainframes also have tools for monitoring performance of the entire system, including networks and applications, that are not available today on UNIX servers [Siwolp 95].
New mainframes provide parallel systems (unlike older bipolar machines) and use complementary metal-oxide semiconductor (CMOS) microprocessors, rather than emitter-coupled logic (ECL) processors. Because CMOS processors are packed more densely than ECL microprocessors, mainframes can be built much smaller and are not so power-hungry. They can also be cooled with air instead of water [Siwolp 95].

While it appeared in the early 1990s that mainframes were being replaced by client/server architectures, they are making a comeback. Some mainframe vendors have seen as much as a 66% jump in mainframe shipments in 1995 due to the new mainframe server software architecture [Siwolp 95].

Given the cost of a mainframe compared to other servers, UNIX workstations and personal computers (PCs), it is not likely that mainframes would replace all other servers in a distributed two or three tier client/server architecture.


Maturity
Mainframe technology has been well known for decades. The new improved models have been fielded since 1994. The new mainframe server software architecture provides the distributed client/server design with massive storage and improved security capability. The new technologies of data warehousing and data mining allow extraction of information from the operational mainframe server's massive storage to provide businesses with timely data to improve overall business effectiveness. For example, stores such as Wal-Mart found that by placing certain products in close proximity within the store, both products sold at higher rates than when not collocated.

Costs and Limitations
By themselves, mainframes are not appropriate mechanisms to support graphical user interfaces. Nor can they easily accommodate increases in the number of user applications or rapidly changing user needs [Edelstein 94].

Alternatives
Using a client/server architecture without a mainframe server is a possible alternative. When requirements for high volume (greater than 50 gigabit), batch type processing, security, and mass storage are minimal, three tier or two tier architectures without a mainframe server may be viable alternatives. Other possible alternatives to using mainframes in a client/server distributed environment are using parallel processing software architecture or using a database machine.

Complementary Technologies
A complementary technology to mainframe server software architectures is open systems. This is because movement in the industry towards interoperable heterogeneous software programs and operating systems will continue to increase reuse of mainframe technology and provide potentially new applications for mainframe capabilities.

Sunday, October 14, 2007

How To: Burn a CD using Windows XP

You can burn a CD using Windows XP. No special CD-burning software required. All it needs is a CD-R or CD-RW disk, a machine running Windows XP and a CD-RW disk drive.

Step 1
Insert a blank CD-R or CD-RW disk into the CD-RW drive. A pop-up dialog box should appear after Windows loads the CD. (No pop-up dialog box? Open "My Computer" from your desktop and double-click on your CD-RW drive icon.)

Step 2
Double-click the option, "Open writable CD folder using Windows Explorer." You will see the files that are currently on the CD in your CD-RW drive. If you inserted a blank CD, you will see nothing.

Step 3
Click on the "Start" menu, and then "My Computer." (No "My Computer" on your start menu? It is likely you have the Windows Classic Start Menu enabled, and you will have to double-click "My Computer" on the desktop instead.) Navigate to the files that you wish to burn onto the CD.

Step 4
Single-click on the first file you wish to burn. Hold down the "Control" key and continue to single-click on other desired files until you have selected them all. Let go of the "Control" key. All your files should remain selected and appear blue. Right-click on any file and choose "Copy."

Step 5
Go back to the open window that displays the contents of your CD drive. Right-click in the white space and choose "Paste." The pasted icons will appear washed out, and they will have little black arrows on them indicating your next step.

Step 6
Choose "Write these files to CD" on the left-hand menu bar under "CD Writing Tasks." A wizard will start. First, name your CD. You can use up to 16 characters. After typing a name, click "Next." This will start the burning process. When the CD is finished burning, the CD will eject itself.

Step 7
Follow the remaining wizard prompts. It will ask if you want to burn the same files to another CD. If so, click "Yes, write these files to another CD." If not, click "Finish." You're done.

Tips & Warnings
Make sure to test your newly burned CD—try to open a few files to ensure that the process was done correctly.

Friday, October 12, 2007

Networking: Client/Server

Client/Server Software Architectures--An Overview

Purpose and Origin
The term client/server was first used in the 1980s in reference to personal computers (PCs) on a network. The actual client/server model started gaining acceptance in the late 1980s. The client/server software architecture is a versatile, message-based and modular infrastructure that is intended to improve usability, flexibility, interoperability, and scalability as compared to centralized, mainframe, time sharing computing.

A client is defined as a requester of services and a server is defined as the provider of services. A single machine can be both a client and a server depending on the software configuration. For details on client/server software architectures see Schussel and Edelstein [Schussel 96, Edelstein 94].

This technology description provides a summary of some common client/server architectures and, for completeness, also summarizes mainframe and file sharing architectures. Detailed descriptions for many of the individual architectures are provided elsewhere in the document.

Technical Detail
Mainframe architecture (not a client/server architecture). With mainframe software architectures all intelligence is within the central host computer. Users interact with the host through a terminal that captures keystrokes and sends that information to the host. Mainframe software architectures are not tied to a hardware platform. User interaction can be done using PCs and UNIX workstations. A limitation of mainframe software architectures is that they do not easily support graphical user interfaces (see Graphical User Interface Builders) or access to multiple databases from geographically dispersed sites. In the last few years, mainframes have found a new use as a server in distributed client/server architectures (see Client/Server Software Architectures) [Edelstein 94].

File sharing architecture (not a client/server architecture). The original PC networks were based on file sharing architectures, where the server downloads files from the shared location to the desktop environment. The requested user job is then run (including logic and data) in the desktop environment. File sharing architectures work if shared usage is low, update contention is low, and the volume of data to be transferred is low. In the 1990s, PC LAN (local area network) computing changed because the capacity of file sharing was strained as the number of online users grew (it can only satisfy about 12 users simultaneously) and graphical user interfaces (GUIs) became popular (making mainframe and terminal displays appear out of date). PCs are now being used in client/server architectures [Schussel 96, Edelstein 94].

Client/server architecture. As a result of the limitations of file sharing architectures, the client/server architecture emerged. This approach introduced a database server to replace the file server. Using a relational database management system (DBMS), user queries could be answered directly. The client/server architecture reduced network traffic by providing a query response rather than total file transfer. It improves multi-user updating through a GUI front end to a shared database. In client/server architectures, Remote Procedure Calls (RPCs) or Structured Query Language (SQL) statements are typically used to communicate between the client and server [Schussel 96, Edelstein 94].
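
As a small illustration of this query-response style, the sketch below shows a client sending a single SQL statement to a database server over JDBC and receiving only the matching rows rather than a whole file. The connection URL, table, and credentials are placeholders for this example.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Client side of a simple client/server exchange: instead of transferring a
// whole file, the client sends one SQL query and receives only the answer.
public class OrderClient {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://dbserver.example.com/shop"; // placeholder URL
        Connection connection = DriverManager.getConnection(url, "user", "password");
        try {
            PreparedStatement query = connection.prepareStatement(
                    "SELECT order_id, total FROM orders WHERE customer_id = ?");
            query.setInt(1, 42);
            ResultSet rows = query.executeQuery();
            while (rows.next()) {
                System.out.println(rows.getInt("order_id") + " -> " + rows.getDouble("total"));
            }
        } finally {
            connection.close();
        }
    }
}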

The remainder of this write-up provides examples of client/server architectures.

Two tier architectures. With two tier client/server architectures (see Two Tier Software Architectures), the user system interface is usually located in the user's desktop environment and the database management services are usually in a server that is a more powerful machine that services many clients. Processing management is split between the user system interface environment and the database management server environment. The database management server provides stored procedures and triggers. There are a number of software vendors that provide tools to simplify development of applications for the two tier client/server architecture [Schussel 96, Edelstein 94].

The two tier client/server architecture is a good solution for distributed computing when work groups are defined as a dozen to 100 people interacting on a LAN simultaneously. It does have a number of limitations. When the number of users exceeds 100, performance begins to deteriorate. This limitation is a result of the server maintaining a connection via "keep-alive" messages with each client, even when no work is being done. A second limitation of the two tier architecture is that implementation of processing management services using vendor proprietary database procedures restricts flexibility and choice of DBMS for applications. Finally, current implementations of the two tier architecture provide limited flexibility in moving (repartitioning) program functionality from one server to another without manually regenerating procedural code. [Schussel 96, Edelstein 94].

Three tier architectures. The three tier architecture (see Three Tier Software Architectures) (also referred to as the multi-tier architecture) emerged to overcome the limitations of the two tier architecture. In the three tier architecture, a middle tier was added between the user system interface client environment and the database management server environment. There are a variety of ways of implementing this middle tier, such as transaction processing monitors, message servers, or application servers. The middle tier can perform queuing, application execution, and database staging. For example, if the middle tier provides queuing, the client can deliver its request to the middle layer and disengage because the middle tier will access the data and return the answer to the client. In addition, the middle layer adds scheduling and prioritization for work in progress. The three tier client/server architecture has been shown to improve performance for groups with a large number of users (in the thousands) and improves flexibility when compared to the two tier approach. Flexibility in partitioning can be as simple as "dragging and dropping" application code modules onto different computers in some three tier architectures. A limitation with three tier architectures is that the development environment is reportedly more difficult to use than the visually-oriented development of two tier applications [Schussel 96, Edelstein 94]. Recently, mainframes have found a new use as servers in three tier architectures (see Mainframe Server Software Architectures).

Three tier architecture with transaction processing monitor technology. The most basic type of three tier architecture has a middle layer consisting of Transaction Processing (TP) monitor technology (see Transaction Processing Monitor Technology). The TP monitor technology is a type of message queuing, transaction scheduling, and prioritization service where the client connects to the TP monitor (middle tier) instead of the database server. The transaction is accepted by the monitor, which queues it and then takes responsibility for managing it to completion, thus freeing up the client. When the capability is provided by third party middleware vendors it is referred to as "TP Heavy" because it can service thousands of users. When it is embedded in the DBMS (and could be considered a two tier architecture), it is referred to as "TP Lite" because experience has shown performance degradation when over 100 clients are connected. TP monitor technology also provides

* the ability to update multiple different DBMSs in a single transaction
* connectivity to a variety of data sources including flat files, non-relational DBMS, and the mainframe
* the ability to attach priorities to transactions
* robust security

Using a three tier client/server architecture with TP monitor technology results in an environment that is considerably more scalable than a two tier architecture with direct client to server connection. For systems with thousands of users, TP monitor technology (not embedded in the DBMS) has been reported as one of the most effective solutions. A limitation to TP monitor technology is that the implementation code is usually written in a lower level language (such as COBOL), and not yet widely available in the popular visual toolsets [Schussel 96].

Three tier with message server. Messaging is another way to implement three tier architectures. Messages are prioritized and processed asynchronously. Messages consist of headers that contain priority information, and the address and identification number. The message server connects to the relational DBMS and other data sources. The difference between TP monitor technology and message server is that the message server architecture focuses on intelligent messages, whereas the TP Monitor environment has the intelligence in the monitor, and treats transactions as dumb data packets. Messaging systems are good solutions for wireless infrastructures [Schussel 96].

Three tier with an application server. The three tier application server architecture allocates the main body of an application to run on a shared host rather than in the user system interface client environment. The application server does not drive the GUIs; rather it shares business logic, computations, and a data retrieval engine. Advantages are that with less software on the client there is less security to worry about, applications are more scalable, and support and installation costs are less on a single server than maintaining each on a desktop client [Schussel 96]. The application server design should be used when security, scalability, and cost are major considerations [Schussel 96].

Three tier with an ORB architecture. Currently, industry is working on developing standards to improve interoperability and determine what the common Object Request Broker (ORB) will be. Developing client/server systems using technologies that support distributed objects holds great promise, as these technologies support interoperability across languages and platforms, as well as enhancing maintainability and adaptability of the system. There are currently two prominent distributed object technologies:

* Common Object Request Broker Architecture (CORBA)
* COM/DCOM (see Component Object Model (COM), DCOM, and Related Capabilities).

Industry is working on standards to improve interoperability between CORBA and COM/DCOM. The Object Management Group (OMG) has developed a mapping between CORBA and COM/DCOM that is supported by several products [OMG 96].

Distributed/collaborative enterprise architecture. The distributed/collaborative enterprise architecture emerged in 1993 (see Distributed/Collaborative Enterprise Architectures). This software architecture is based on Object Request Broker (ORB) technology, but goes further than the Common Object Request Broker Architecture (CORBA) by using shared, reusable business models (not just objects) on an enterprise-wide scale. The benefit of this architectural approach is that standardized business object models and distributed object computing are combined to give an organization flexibility to improve effectiveness organizationally, operationally, and technologically. An enterprise is defined here as a system comprised of multiple business systems or subsystems. Distributed/collaborative enterprise architectures are limited by a lack of commercially-available object orientation analysis and design method tools that focus on applications [Shelton 93, Adler 95].

Usage Considerations
Client/server architectures are being used throughout industry and the military. They provide a versatile infrastructure that supports insertion of new technology more readily than earlier software designs.

Maturity
Client/server software architectures have been in use since the late 1980s. See individual technology descriptions for more detail.

Costs and Limitations
There are a number of tradeoffs that must be made to select the appropriate client/server architecture. These include strategic business planning, potential growth in the number of users, cost, and the homogeneity of the current and future computational environment.

Dependencies
If a distributed object approach is employed, then the CORBA and/or COM/DCOM technologies should be considered (see Common Object Request Broker Architecture and Component Object Model (COM), DCOM, and Related Capabilities).

Alternatives
Alternatives to client/server architectures would be mainframe or file sharing architectures.

Complementary Technologies
Complementary technologies for client/server architectures are computer-aided software engineering (CASE) tools because they facilitate client/server architectural development, and open systems (see COTS and Open Systems--An Overview) because they facilitate the development of architectures that improve scalability and flexibility.

Thursday, October 11, 2007

IT News: Talent pool may raise biz for IT product cos



WITH India becoming the global IT services hub, leading product companies including Oracle, Sun Microsystems, IBM and Microsoft are sharpening their focus on the education sector to promote their technologies and bolster revenues. These companies are actively forging ties with educational and training institutes to develop a ready-to-use talent pool. The partnerships could indirectly bring in more business, say analysts.
The programmes are designed to help companies position their products in the global marketplace. The availability of a trained talent pool could be a differentiating factor in a closely-contested deal. In a multi-million dollar deal, when a client is selecting a product partner, they will look at market capabilities, mainly the number of professionals who have been trained on the product's technologies. “With more and more businesses outsourcing their services to India, it is important for major product development companies to create a large tech-savvy resource pool here,” said Gartner principal analyst Kamlesh Bhatia.
IBM, for instance, imparted training on open standards-based technologies to more than 80,000 students across 745 colleges in India in 2006. “As part of the IBM Academic Initiative, we offer workshops and certification programmes on various technologies. The aim is to develop strategic linkages with universities and colleges and to assist them in developing a talent pool,” says IBM programme director Amol Mahamuni.
Microsoft India has also partnered with the Board for Information Technology Education Standards (BITES) in Karnataka to address the training needs of students in BITES member institutes.
TALENT CONTEST
* IT cos are tying up with educational & training institutes to develop a ready-to-use talent pool
* Availability of a talent pool could be a differentiating factor in a closely-contested deal
* IBM has imparted training on open standards-based technologies to 80,000 students across 745 colleges in India

What is a web content management system?

Web content management systems are often used for storing, controlling, versioning, and publishing industry-specific documentation such as news articles, operators' manuals, technical manuals, sales guides, and marketing brochures. A content management system may support the following features:

* Import and creation of documents and multimedia material
* Identification of all key users and their content management roles
* The ability to assign roles and responsibilities to different content categories or types.
* Definition of the content workflow tasks, often coupled with event messaging so that content managers are alerted to changes in content.
* The ability to track and manage multiple versions of a single instance of content.
* The ability to publish the content to a repository to support access to the content. Increasingly, the repository is an inherent part of the system, and incorporates enterprise search and retrieval.
* Some content management systems allow the textual aspect of content to be separated to some extent from formatting. For example the CMS may automatically set default color, fonts, or layouts.

Wednesday, October 10, 2007

CMS: Content Management System

A Content Management System (CMS) is a software system used for content management. Content management systems are deployed primarily for interactive use by a potentially large number of contributors. Other related forms of content management are listed below.

The content managed includes computer files, image media, audio files, electronic documents and web content. The idea behind a CMS is to make these files available inter-office, as well as over the web. A Content Management System would most often be used as an archive as well. Many companies use a CMS to store files in a non-proprietary form. Companies use a CMS to share files with ease, as most systems use server-based software, even further broadening file availability. As shown below, many Content Management Systems include a feature for Web Content, and some have a feature for a "workflow process."

"Work flow" is the idea of moving an electronic document along for either approval, or for adding content. Some Content Management Systems will easily facilitate this process with email notification, and automated routing. This is ideally a collaborative creation of documents. A CMS facilitates the organization, control, and publication of a large body of documents and other content, such as images and multimedia resources.

A web content management system is a content management system with additional features to ease the tasks required to publish web content to web sites.

Tuesday, October 9, 2007

PHP: What is Smarty?

Smarty is a web template system written in PHP. Smarty is primarily promoted as a tool for separation of concerns, which is a common design strategy for certain kinds of applications.

Smarty generates web content by the placement of special Smarty tags within a document. These tags are processed and substituted with other code.

Tags are directives for Smarty that are enclosed by template delimiters. These directives can be variables, denoted by a dollar sign ($), functions, or logical or control flow statements. Smarty allows PHP programmers to define functions that can be accessed using Smarty tags.

Smarty is intended to simplify compartmentalization, allowing the presentation of a web page to change separately from the back-end. Ideally, this eases the costs and efforts associated with software maintenance. Under successful application of this development strategy, designers are shielded from the back-end coding, and PHP programmers are shielded from the presentation coding.

Smarty supports several high-level template programming features, including:

* regular expressions
* control flow statements: foreach, while
* if, elseif, else
* variable modifiers - for example {$variable|nl2br}
* user-created functions
* mathematical evaluation within the template

along with other features. Other template engines also support these features. Smarty templates are often incorporated into existing PHP web applications; most often, Smarty is used where a web application or website has a theme system built into it, so that the templates can be changed from theme to theme.
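To make this concrete, here is a minimal sketch of a Smarty 2-style template and the PHP script that drives it. The file names, variable names and data are hypothetical, and the sketch assumes the Smarty library is installed, on the include path, and that page.tpl sits in Smarty's template directory:

    <?php
    // page.php - assigns data and renders the template below
    require_once('Smarty.class.php');

    $smarty = new Smarty();
    $smarty->assign('name', "Visitor\nWelcome back");              // hypothetical data
    $smarty->assign('items', array('Intro to EJB', 'Smarty basics'));
    $smarty->display('page.tpl');
    ?>

    {* page.tpl - Smarty tags are enclosed in the default { } delimiters *}
    <p>Hello, {$name|nl2br}!</p>       {* variable with the nl2br modifier *}
    {if $items}
      <ul>
      {foreach from=$items item=book}
        <li>{$book}</li>               {* control flow: foreach over an assigned array *}
      {/foreach}
      </ul>
    {else}
      <p>Your reading list is empty.</p>
    {/if}

Running page.php produces plain HTML; a designer can edit page.tpl without touching the PHP script, which is exactly the separation Smarty promotes.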

Sunday, October 7, 2007

Cellphone: Nokia's Aeon "full surface screen" cellphone concept




Nokia's Aeon: a concept phone that combines two touch-sensitive panels mounted on a fuel-cell power pack. Each of the panels is capable of being used independently. Because all of the buttons displayed on the touch screen are virtual, in one situation one panel could operate as the display and the other as the keypad. Nokia is also establishing a new wireless standard with Wibree, basically an upgraded Bluetooth, which would allow the Aeon to be a thin client, farming out processing and storage.

The Aeon appears to be a typical razor-thin candy-bar form factor cell phone with no actual buttons; it can change into any kind of menu, button or keypad with a simple touch. The touch-screen approach brings up a ton of quirky problems, like damaging the display with those pointy thumbs of yours.

The concept phone, dubbed Aeon, combines two touch-sensitive panels mounted on a fuel-cell power pack. The handset's connectivity and electronics are built into the panels to allow them to be used independently. When assembled, one panel would operate as the display, the other as the keypad. Since the buttons are entirely virtual, Aeon can flip instantly between a numeric pad for dialling, a text-entry pad for messaging and a media-player controller.

In Nokia's vision of wearable technology, users could wear the lightweight panels as a badge or connect them to a wrist strap. The most prominent design feature of the Aeon is a touchscreen that stretches over the full surface area of the phone, similar to BenQ-Siemens's Black Box concept phone.


Monday, October 1, 2007

SAP: SAP Customer Relationship Management

What is SAP?
SAP, started in 1972 by five former IBM employees in Mannheim, Germany, states that it is the world's largest inter-enterprise software company and the world's fourth-largest independent software supplier, overall.
The name SAP comes from the German "Systeme, Anwendungen, Produkte" ("Systems, Applications, and Products"). The original SAP idea was to provide customers with the ability to interact with a common corporate database for a comprehensive range of applications. Gradually, the applications have been assembled, and today many corporations, including IBM and Microsoft, are using SAP products to run their own businesses.
SAP applications, built around their latest R/3 system, provide the capability to manage financial, asset, and cost accounting, production operations and materials, personnel, plants, and archived documents. The R/3 system runs on a number of platforms including Windows 2000 and uses the client/server model. The latest version of R/3 includes a comprehensive Internet-enabled package.
SAP has recently recast its product offerings under a comprehensive Web interface, called mySAP.com, and added new e-business applications, including customer relationship management (CRM) and supply chain management (SCM).
As of January 2007, SAP, a publicly traded company, had over 38,400 employees in over 50 countries and more than 36,200 customers around the world. SAP is turning its attention to small- and medium-sized businesses (SMBs). A recent R/3 version was provided for IBM's AS/400 platform.
SAP Customer Relationship Management:


Features & Functions

SAP Customer Relationship Management (SAP CRM) includes features and functions to support core business processes in the following areas:
Marketing – Analyze, plan, develop, and execute all marketing activities through all customer interaction points. This central marketing platform empowers marketers with complete business insights – enabling you to make intelligent business decisions and to drive end-to-end marketing processes. Quickly deploy marketing functionality in an on-demand model and transition to SAP CRM as business needs evolve.
Sales – Maintain focus on productive activity to acquire, grow, and retain profitable relationships with functionality for sales planning and forecasting, territories, accounts, contacts, activities, opportunities, quotations, orders, product configuration, pricing, billing, and contracts. Quickly deploy sales management functionality in an on-demand model and transition to SAP CRM as business needs evolve.
Service – Drive service revenue and profitability with support for service sales and marketing; service contract management; field service; e-service; workforce management; and channel service. Call centers, field service, and e-service provide various flexible delivery options. Quickly deploy service functionality in an on-demand model and transition to SAP CRM as business needs evolve.
Partner channel management – Attain a more profitable and loyal indirect channel by managing partner relationships and empowering channel partners. Improve processes for partner recruitment, partner management, communications, channel marketing, channel forecasting, collaborative selling, partner order management, channel service, and analytics for partners and channel managers.
Interaction center – Maximize customer loyalty, reduce costs, and boost revenue by transforming your interaction center into a strategic delivery channel for marketing, sales, and service efforts across all contact channels. Activities such as telemarketing, telesales, customer service, HR and IT help desk, and interaction center management are supported.
Web channel – Increase sales and reduce transaction costs by turning the Internet into a valuable sales, marketing, and service channel for businesses and consumers. Increase profitability and reach new markets with functionality for e-marketing, e-commerce, e-service, and Web channel analytics. Deploy these capabilities directly against SAP ERP or with SAP CRM as a fully integrated customer channel.
Business communications management – Manage inbound and outbound contacts across multiple locations and communications channels effectively and efficiently. By integrating multichannel communications with your customer-facing business processes, you can provide your customers and partners with a smooth, consistent experience across all avenues of contact, including voice, text messaging, Web contacts, and e-mail.


Sunday, September 30, 2007

Security : Steganography

Over the past couple of years, steganography has been the source of a lot of discussion, particularly as it was suspected that terrorists connected with the September 11 attacks might have used it for covert communications. While no such connection has been proven, the concern points out the effectiveness of steganography as a means of obscuring data. Indeed, along with encryption, steganography is one of the fundamental ways by which data can be kept confidential. This article will offer a brief introductory discussion of steganography: what it is, how it can be used, and the true implications it can have on information security.

What is Steganography?

While we are discussing it in terms of computer security, steganography is really nothing new; it has been around since ancient times. In ancient Greece and Rome, for example, text was traditionally written on wax that was poured on top of stone tablets. If the sender of the information wanted to obscure the message - for purposes of military intelligence, for instance - they would use steganography: the wax would be scraped off and the message would be inscribed or written directly on the tablet; wax would then be poured on top of it, thereby obscuring not just the message's meaning but its very existence[1].

According to Dictionary.com, steganography (also known as "steg" or "stego") is "the art of writing in cipher, or in characters, which are not intelligible except to persons who have the key; cryptography" [2]. In computer terms, steganography has evolved into the practice of hiding a message within a larger one in such a way that others cannot discern the presence or contents of the hidden message[3]. In contemporary terms, steganography has evolved into a digital strategy of hiding a file in some form of multimedia, such as an image, an audio file (like a .wav or mp3) or even a video file.
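As a toy illustration of that idea (my own sketch, not how F5 or any real tool works), a short message can be hidden in the least significant bits of an array of cover bytes, such as the pixel values of an uncompressed image. Real tools operate on formats like JPEG coefficients and are far more involved:

    <?php
    // Hide each bit of the message in the least significant bit of one cover byte.
    function embed_lsb(array $cover, $message) {
        $bits = '';
        foreach (str_split($message) as $ch) {
            $bits .= str_pad(decbin(ord($ch)), 8, '0', STR_PAD_LEFT);
        }
        for ($i = 0; $i < strlen($bits) && $i < count($cover); $i++) {
            $cover[$i] = ($cover[$i] & 0xFE) | (int)$bits[$i];   // overwrite the LSB
        }
        return $cover;
    }

    // Recover $length characters by reading the LSBs back out.
    function extract_lsb(array $stego, $length) {
        $bits = '';
        for ($i = 0; $i < $length * 8; $i++) {
            $bits .= $stego[$i] & 1;
        }
        $message = '';
        foreach (str_split($bits, 8) as $byte) {
            $message .= chr(bindec($byte));
        }
        return $message;
    }
    ?>

The cover bytes barely change (at most by one), so the carrier looks the same to a casual observer, which is the whole point of the technique.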

What is Steganography Used for?

Like many security tools, steganography can be used for a variety of reasons, some good, some not so good. Legitimate purposes can include things like watermarking images for reasons such as copyright protection. Digital watermarks (also known as fingerprinting, significant especially in copyrighting material) are similar to steganography in that they are overlaid in files, appear to be part of the original file, and are thus not easily detectable by the average person. Steganography can also be used as a substitute for a one-way hash value (where you take a variable-length input and create a fixed-length output string to verify that no changes have been made to the original variable-length input)[4]. Further, steganography can be used to tag notes to online images (like post-it notes attached to paper files). Finally, steganography can be used to maintain the confidentiality of valuable information, to protect the data from possible sabotage, theft, or unauthorized viewing[5].
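For illustration, the one-way hash idea mentioned above can be sketched in a few lines of PHP; the file name is hypothetical and sha256 is just one possible algorithm:

    <?php
    // The digest has a fixed length regardless of the input length; recomputing
    // it later and comparing against the stored value reveals any change.
    $original = file_get_contents('contract.txt');   // hypothetical input file
    $digest   = hash('sha256', $original);           // always 64 hex characters

    echo $digest . "\n";
    ?>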

Unfortunately, steganography can also be used for illegitimate reasons. For instance, if someone was trying to steal data, they could conceal it in another file or files and send it out in an innocent looking email or file transfer. Furthermore, a person with a hobby of saving pornography, or worse, to their hard drive, may choose to hide the evidence through the use of steganography. And, as was pointed out in the concern for terroristic purposes, it can be used as a means of covert communication. Of course, this can be both a legitimate and an illegitimate application.

Steganography Tools

There are a vast number of tools that are available for steganography. An important distinction that should be made among the tools available today is the difference between tools that do steganography, and tools that do steganalysis, which is the method of detecting steganography and destroying the original message. Steganalysis focuses on this aspect, as opposed to simply discovering and decrypting the message, because this can be difficult to do unless the encryption keys are known.

A comprehensive discussion of steganography tools is beyond the scope of this article. However, there are many good places to find steganography tools on the Net. One good place to start your search for stego tools is on Neil Johnson's Steganography and Digital Watermarking Web site. The site includes an extensive list of steganography tools. Another comprehensive tools site is located at the StegoArchive.com.

For steganalysis tools, a good site to start with is Neil Johnson's Steganalysis site. Niels Provos's site is also a great reference, but it is currently being relocated, so keep checking back on its progress.

The plethora of tools available also tends to span the spectrum of operating systems. Windows, DOS, Linux, Mac, Unix: you name it, and you can probably find it.

How Do Steganography Tools Work?

To show how easy steganography is, I started out by downloading one of the more popular freeware tools out now: F5, then moved to a tool called SecurEngine, which hides text files within larger text files, and lastly a tool that hides files in MP3s called MP3Stego. I also tested one commercial steganography product, Steganos Suite.

F5 was developed by Andreas Westfeld, and runs as a DOS client. A couple of GUIs were later developed: one named "Frontend", developed by Christian Wohne, and the other, named "Stegano", by Thomas Biel. I tried F5, beta version 12, and found it very easy to encode a message into a JPEG file, even though the buttons in the GUI are written in German! Users simply follow the buttons, entering the path of the JPEG file and then the location of the data to be hidden (in my case, a simple text file created in Notepad), at which point the program prompts for a pass phrase. As you can see from the before and after pictures, it is very hard to tell them apart, embedded message or not.

Steganography and Security

As mentioned previously, steganography is an effective means of hiding data, thereby protecting the data from unauthorized or unwanted viewing. But stego is simply one of many ways to protect the confidentiality of data. It is probably best used in conjunction with another data-hiding method. When used in combination, these methods can all be a part of a layered security approach. Some good complementary methods include:

  • Encryption - Encryption is the process of passing data, or plaintext, through a series of mathematical operations that generate an alternate form of the original data known as ciphertext. The encrypted data can only be read by parties who have been given the necessary key to decrypt the ciphertext back into its original plaintext form. Encryption doesn't hide data, but it does make it hard to read! (A minimal code sketch follows this list.)
  • Hidden directories (Windows) - Windows offers this feature, which allows users to hide files. Using this feature is as easy as changing the properties of a directory to "hidden", and hoping that no one displays all types of files in their explorer.
  • Hiding directories (Unix) - Hiding files in existing directories that contain a lot of files, such as the /dev directory on a Unix implementation, or creating a directory whose name starts with three dots (...) instead of the normal single or double dot.
  • Covert channels - Some tools can be used to transmit valuable data in seemingly normal network traffic. One such tool is Loki. Loki is a tool that hides data in ICMP traffic (like ping).
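The encryption item above can be sketched with PHP's OpenSSL functions. The cipher choice, key handling and message are assumptions made purely for illustration, not a hardened implementation:

    <?php
    // Plaintext is transformed into ciphertext with a secret key; without the key
    // the ciphertext is unreadable, but (unlike steganography) its existence is
    // not hidden.
    $key        = random_bytes(32);                                   // 256-bit secret key
    $iv         = random_bytes(openssl_cipher_iv_length('aes-256-cbc'));
    $plaintext  = 'Meet at the usual place at 09:00';

    $ciphertext = openssl_encrypt($plaintext, 'aes-256-cbc', $key, 0, $iv);
    $recovered  = openssl_decrypt($ciphertext, 'aes-256-cbc', $key, 0, $iv);

    echo $ciphertext . "\n";   // base64 text, meaningless without $key and $iv
    echo $recovered . "\n";    // identical to $plaintext
    ?>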

Protecting Against Malicious Steganography

Unfortunately, all of the methods mentioned above can also be used to hide illicit, unauthorized or unwanted activity. What can you do to prevent or detect issues with stego? There is no easy answer. If someone has decided to hide their data, they will probably be able to do so fairly easily. The only way to detect steganography is to be actively looking for it in specific files, or to get very lucky. Sometimes an actively enforced security policy can provide the answer: this would require the implementation of company-wide acceptable use policies that restrict the installation of unauthorized programs on company computers.

Using the tools that you already have to detect movement and behavior of traffic on your network may also be helpful. Network intrusion detection systems can help administrators gain an understanding of normal traffic in and around the network and can thus assist in detecting any type of anomaly, especially changes in behavior such as the increased movement of large images around the network. If the administrator is aware of this sort of anomalous activity, it may warrant further investigation. Host-based intrusion detection systems deployed on computers may also help to identify anomalous storage of image and/or video files.

A research paper by Stefan Hetzel cites two methods of attacking steganography, which really are also methods of detecting it. They are the visual attack (actually seeing the differences in the files that are encoded) and the statistical attack: "The idea of the statistical attack is to compare the frequency distribution of the colors of a potential stego file with the theoretically expected frequency distribution for a stego file." It might not be the quickest method of protection, but if you suspect this type of activity, it might be the most effective. For JPEG files specifically, a tool called Stegdetect, which looks for signs of steganography in JPEG files, can be employed. Stegbreak, a companion tool to Stegdetect, works to decrypt possible messages encoded in a suspected steganographic file, should that be the path you wish to take once the stego has been detected.
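To make the statistical idea concrete, here is a rough sketch (my own simplification, not part of Stegdetect or the cited paper) of a chi-square style comparison over sample values such as the color components of an image:

    <?php
    // LSB embedding tends to equalize the counts of each "pair of values"
    // (2k, 2k+1), so the observed count of the even member is compared with the
    // pair's mean. A statistic near zero means the counts are already equalized,
    // which is consistent with embedding; a large statistic is not.
    function chi_square_statistic(array $values) {
        $counts = array_fill(0, 256, 0);
        foreach ($values as $v) {
            $counts[$v & 0xFF]++;                      // histogram of 8-bit sample values
        }
        $chi = 0.0;
        for ($k = 0; $k < 128; $k++) {
            $expected = ($counts[2 * $k] + $counts[2 * $k + 1]) / 2.0;
            if ($expected > 0) {
                $chi += pow($counts[2 * $k] - $expected, 2) / $expected;
            }
        }
        return $chi;
    }
    // Example: chi_square_statistic($pixelBytes) for bytes extracted from an image.
    ?>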

Conclusions

Steganography is a fascinating and effective method of hiding data that has been used throughout history. There are methods that can be employed to uncover such devious tactics, but the first step is awareness that such methods even exist. There are also many good reasons to use this type of data hiding, including watermarking and more secure central storage of such things as passwords or key processes. Regardless, the technology is easy to use and difficult to detect. The more you know about its features and functionality, the further ahead you will be in the game.

Resources:

[1] Steganography, by Neil F. Johnson, George Mason University,
http://www.jjtc.com/stegdoc/sec202.html

[2] http://dictionary.reference.com/search?q=steganography

[3] The Free On-line Dictionary of Computing, © 1993-2001 Denis Howe
http://www.nightflight.com/foldoc/index.html

[4] Applied Cryptography, Bruce Schneier, John Wiley and Sons Inc., 1996

[5] Steganography: Hidden Data, by Deborah Radcliff, June 10, 2002,
http://www.computerworld.com/securitytopics/security/story/0,10801,71726,00.html

Friday, September 28, 2007

SPM: Software Project Management


Project Schedule


The project schedule is the core of the project plan. It is used by the project manager to commit people to the project and show the organization how the work will be performed. Schedules are used to communicate final deadlines and, in some cases, to determine resource needs. They are also used as a kind of checklist to make sure that every task necessary is performed. If a task is on the schedule, the team is committed to doing it. In other words, the project schedule is the means by which the project manager brings the team and the project under control.
The project schedule is a calendar that links the tasks to be done with the resources that will do them. Before a project schedule can be created, the project manager must have a work breakdown structure (WBS), an effort estimate for each task, and a resource list with availability for each resource. If these are not yet available, it may be possible to create something that looks like a schedule, but it will essentially be a work of fiction. A project manager's time is better spent on working with the team to create a WBS and estimates (using a consensus-driven estimation method like Wideband Delphi - see Chapter 3) than on trying to build a project schedule without them. The reason for this is that a schedule itself is an estimate: each date in the schedule is estimated, and if those dates do not have the buy-in of the people who are going to do the work, the schedule will almost certainly be inaccurate.
The Wideband Delphi process is explained in detail in Chapter 3: Estimation. There are many project scheduling software products which can do much of the tedious work of calculating the schedule automatically, and plenty of books and tutorials dedicated to teaching people how to use them. However, before a project manager can use these tools, he should understand the concepts behind the WBS, dependencies, resource allocation, critical paths, Gantt charts and earned value. These are the real keys to planning a successful project. The most popular tool for creating a project schedule is Microsoft Project. There are also free and open source project scheduling tools available for most platforms which feature task lists, resource allocation, predecessors and Gantt charts. Other project scheduling software packages include:
* Open Workbench
* dotProject
* netOffice
* TUTOS
Allocate Resources to the Tasks

The first step in building the project schedule is to identify the resources required to perform each of the tasks required to complete the project. (Generating project tasks is explained in more detail in the Wideband Delphi Estimation Process page.) A resource is any person, item, tool, or service that is needed by the project that is either scarce or has limited availability.

Many project managers use the terms “resource” and “person” interchangeably, but people are only one kind of resource. The project could include computer resources (like shared computer room, mainframe, or server time), locations (training rooms, temporary office space), services (like time from contractors, trainers, or a support team), and special equipment that will be temporarily acquired for the project. Most project schedules only plan for human resources; the other kinds of resources are listed in the resource list, which is part of the project plan.

One or more resources must be allocated to each task. To do this, the project manager must first assign the task to people who will perform it. For each task, the project manager must identify one or more people on the resource list capable of doing that task and assign it to them. Once a task is assigned, the team member who is performing it is not available for other tasks until the assigned task is completed. While some tasks can be assigned to any team member, most can be performed only by certain people. If those people are not available, the task must wait.
Identify Dependencies

Once resources are allocated, the next step in creating a project schedule is to identify dependencies between tasks. A task has a dependency if it involves an activity, resource, or work product that is subsequently required by another task. Dependencies come in many forms: a test plan can't be executed until a build of the software is delivered; code might depend on classes or modules built in earlier stages; a user interface can't be built until the design is reviewed. If Wideband Delphi is used to generate estimates, many of these dependencies will already be represented in the assumptions. It is the project manager's responsibility to work with everyone on the engineering team to identify these dependencies. The project manager should start by taking the WBS and adding dependency information to it: each task in the WBS is given a number, and the number of any task that it depends on is listed next to it as a predecessor. One task can depend on another in four ways: finish-to-start (the most common), start-to-start, finish-to-finish, and start-to-finish.
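As a small illustration of a WBS with predecessor numbers and assigned resources (the task names, numbers, estimates and people below are hypothetical):

    <?php
    // Each task carries an effort estimate, an assigned resource, and the numbers
    // of the tasks it depends on; a task may start only after every task in its
    // 'predecessors' list has finished (a finish-to-start link).
    $tasks = array(
        1 => array('name' => 'Review design',        'effort_days' => 2, 'resource' => 'Asha',  'predecessors' => array()),
        2 => array('name' => 'Build user interface', 'effort_days' => 5, 'resource' => 'Ravi',  'predecessors' => array(1)),
        3 => array('name' => 'Deliver test build',   'effort_days' => 1, 'resource' => 'Ravi',  'predecessors' => array(2)),
        4 => array('name' => 'Execute test plan',    'effort_days' => 3, 'resource' => 'Meena', 'predecessors' => array(3)),
    );
    ?>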



Create the Schedule


Once the resources and dependencies are assigned, the software will arrange the tasks to reflect the dependencies. The software also allows the project manager to enter effort and duration information for each task; with this information, it can calculate a final date and build the schedule.
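As a rough sketch of the kind of calculation such software performs (ignoring calendars, resource leveling and non-working days, and assuming only finish-to-start links), a simple forward pass over a task array structured like the earlier sketch could look like this:

    <?php
    // Each task starts when its latest predecessor finishes; the largest finish
    // value is the overall project duration in working days.
    function forward_pass(array $tasks) {
        $finish = array();
        foreach ($tasks as $id => $task) {             // assumes tasks are listed in dependency order
            $start = 0;
            foreach ($task['predecessors'] as $p) {
                $start = max($start, $finish[$p]);     // finish-to-start dependency
            }
            $finish[$id] = $start + $task['effort_days'];
            echo "{$task['name']}: day $start to day {$finish[$id]}\n";
        }
        return max($finish);
    }

    $tasks = array(
        1 => array('name' => 'Review design',        'effort_days' => 2, 'predecessors' => array()),
        2 => array('name' => 'Build user interface', 'effort_days' => 5, 'predecessors' => array(1)),
        3 => array('name' => 'Execute test plan',    'effort_days' => 3, 'predecessors' => array(2)),
    );
    echo 'Project duration: ' . forward_pass($tasks) . " days\n";
    ?>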

In the Gantt chart produced by the scheduling tool, each task is represented by a bar, and the dependencies between tasks are represented by arrows. Each arrow points either to the start or to the end of a task, depending on the type of predecessor. A black diamond on the chart is a milestone, a task with no duration; milestones are used to show important events in the schedule. A black bar drawn above a group of tasks is a summary task, which shows that those tasks are subtasks of the same parent task. Summary tasks can contain other summary tasks as subtasks. For example, if the team used an extra Wideband Delphi session to decompose a task in the original WBS into subtasks, the original task should be shown as a summary task with the results of the second estimation session as its subtasks.