Bharat Banate's Work Profile


Monday, December 10, 2007

Enterprise JavaBeans (EJB): Introduction

Enterprise JavaBeans (EJB) technology is the server-side component architecture for Java Platform, Enterprise Edition (Java EE). EJB technology enables rapid and simplified development of distributed, transactional, secure and portable applications based on Java technology.



The EJB specification intends to provide a standard way to implement the back-end 'business' code typically found in enterprise applications (as opposed to 'front-end' user-interface code). Such code frequently ran into the same types of problems, and solutions to those problems were repeatedly re-implemented by programmers. Enterprise JavaBeans were intended to handle common concerns such as persistence, transactional integrity, and security in a standard way, leaving programmers free to concentrate on the particular problem at hand.

EJB types
Stateful Session Beans are distributed objects having state: that is, they keep track of which calling program they are dealing with throughout a session. For example, checking out in a web store might be handled by a stateful session bean, which would use its state to keep track of where the customer is in the checkout process. Stateful session beans' state may be persisted, but access to the bean instance is limited to only one client.

Stateless Session Beans are distributed objects that do not have state associated with them, thus allowing concurrent access to the bean. The contents of instance variables are not guaranteed to be preserved across method calls. For example, sending an e-mail to customer support might be handled by a stateless bean, since this is a one-off operation and not part of a multi-step process. The lack of overhead to maintain a conversation with the calling program makes stateless beans less resource-intensive than stateful beans.
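To make the distinction concrete, here is a minimal sketch in the EJB 3.0 annotation style. The bean names and methods (CheckoutBean, SupportMailBean) are hypothetical, and in practice each bean would go in its own source file.

import java.util.ArrayList;
import java.util.List;
import javax.ejb.Remove;
import javax.ejb.Stateful;
import javax.ejb.Stateless;

// Stateful: one instance is bound to one client, and the cart
// contents survive across method calls within the session.
@Stateful
public class CheckoutBean {
    private final List<String> items = new ArrayList<String>();

    public void addItem(String item) { items.add(item); }

    public List<String> getItems() { return items; }

    // @Remove tells the container to discard this instance
    // (and its conversational state) once checkout completes.
    @Remove
    public void confirmOrder() { /* place the order here */ }
}

// Stateless: instances are pooled and shared among clients, so no
// conversational state may be kept between calls.
@Stateless
public class SupportMailBean {
    public void sendSupportMail(String customer, String message) {
        // one-off operation; nothing is remembered afterwards
    }
}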

Message Driven Beans were introduced in the EJB 2.0 specification, which is supported by Java 2 Platform, Enterprise Edition 1.3 or higher. The message bean represents the integration of JMS (Java Message Service) with EJB to create an entirely new type of bean designed to handle asynchronous JMS messages, allowing event-driven processing inside the EJB container. Message Driven Beans are distributed objects that behave asynchronously: they handle operations that do not require an immediate response. For example, a user of a website clicking on a "keep me informed of future updates" box may trigger a call to a Message Driven Bean to add the user to a list in the company's database. (This call is asynchronous because the user does not need to wait to be informed of its success or failure.) These beans subscribe to JMS message queues or message topics. Unlike other types of beans, an MDB has no client view (Remote/Home interfaces), i.e. clients cannot look up an MDB instance; it just listens for incoming messages on a JMS queue (or topic) and processes them automatically.
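A minimal sketch of what such a bean might look like, written in the EJB 3.0 annotation style (EJB 2.0 itself would configure the same thing in a deployment descriptor); the bean name NotifyRequestBean and the behaviour in onMessage() are assumptions for illustration only.

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue")
})
public class NotifyRequestBean implements MessageListener {

    // The container calls onMessage() whenever a message arrives on
    // the configured queue; no client ever holds a reference to us.
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String email = ((TextMessage) message).getText();
                // add the address to the mailing list; the caller
                // never waits for this to finish
            }
        } catch (JMSException e) {
            // log and let the container handle redelivery
        }
    }
}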

Previous versions of EJB also used a type of bean known as an Entity Bean. These were distributed objects having persistent state. Beans whose container managed their persistent state were said to use Container-Managed Persistence (CMP), whereas beans that managed their own state were said to use Bean-Managed Persistence (BMP). Entity Beans were replaced by the Java Persistence API in EJB 3.0, though as of 2007, CMP 2.x style entity beans are still available for backward compatibility.
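For comparison, a minimal sketch of the EJB 3.0 replacement: a plain JPA entity, where the persistence annotations take over the role of CMP. The Customer class and its fields are made up for the example.

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class Customer {

    // The persistence provider maps this class to a database table;
    // each instance corresponds to one row.
    @Id
    @GeneratedValue
    private Long id;

    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}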

Other types of Enterprise Beans have been proposed. For instance, Enterprise Media Beans (JSR 86) address the integration of multimedia objects in Java EE applications.

Monday, December 3, 2007

The Year 2038 Bug

It's barely 8 years since we had the millennium bug, so don't say you didn't get enough warning! A lot of systems in the world may have date rollover troubles in a fraction over 30 years' time. The millennium bug (more accurately known as the Two Digit Century Rollover Bug) was caused by using 2 digits instead of 4 for the year. So Christmas 2007 falls on 12/25/07. Of course, when 1999 rolled over to 2000, the first day of the new century became 01/01/00, and this could have had serious consequences had all the old systems not been sorted out in advance. This problem will happen again when 2099 rolls over to 2100, and every century after that, if anyone is silly enough to keep using two-digit years.

But the Unix bug will occur in 2038. That's because the Unix date system counts the seconds since January 1, 1970 in a time_t, traditionally a signed 32-bit int. The highest value it can hold is 2^31 - 1 = 2,147,483,647 seconds, which is about 24,855 days. Add that to Jan 1 1970 and you get Jan 19 2038! So early that morning, any software using a signed 32-bit int for a date will wrap around to a negative value, which most systems will interpret as December 13, 1901, not Jan 1 1970. So how are you going to cope with this problem, dudes?
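You can watch the wrap-around happen by simulating a 32-bit time_t by hand. A small sketch in Java, whose own Date uses a 64-bit millisecond count and is therefore immune:

import java.util.Date;

public class Year2038Demo {
    public static void main(String[] args) {
        int maxTimeT = Integer.MAX_VALUE;   // 2,147,483,647 seconds
        // the last second a signed 32-bit time_t can represent:
        System.out.println(new Date(maxTimeT * 1000L));
        // prints Jan 19 2038 (03:14:07 UTC, shown in your time zone)

        int wrapped = maxTimeT + 1;         // signed overflow: now negative
        System.out.println(new Date(wrapped * 1000L));
        // prints Dec 13 1901 -- the counter went negative, not back to 1970
    }
}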

Sunday, November 4, 2007

Storage: 1TB Hard Disk Drive


Recently, both major companies Hitachi and Seagate have launched 1TB (1,000GB) disk drives, a major milestone in the storage world.
In the Indian market, Hitachi has currently launched its 1TB drives.
“Growing volumes of songs, movies, personal videos, pictures and games stored on our PCs highlights a ready market for higher capacity HDDs in India.”

“At 133 gigabits per square inch, the Hitachi 1TB hard drive crams much more data per square inch than any other HDD available in the market today. Quieter acoustics, lower heat dissipation levels and much faster read/write speeds make this family of Hitachi HDDs a must-have for all PC users,” he added.




The 3.5-inch drive belongs to the 7200 RPM family of Hitachi 1TB SATA hard disk drives. These storage units can be used for business, commercial, gaming and media centre PCs, and also in external storage devices. The new drive is equipped with reliable perpendicular magnetic recording technology, a robust 3Gb/s SATA interface, and an enhanced Rotational Vibration Safeguard (RVS) system designed to sustain performance in densely packed multi-drive systems.

Besides this, the 1TB drive ensures fast data transfer rates, low power consumption and advanced shock protection. The Hitachi 1TB SATA hard disk comes with a 5-year warranty.

The Seagate Barracuda 7200.11 hard drive consumes 13W of power, compared to Hitachi’s 1TB hard drive, which draws about 13.6W. In addition, Seagate’s new 1TB hard drive has just four platters, which results in cooler operating temperatures and lower power consumption, helping extend hard disk life with fewer chances of wear and tear.


Seagate claims that the Barracuda 7200.11 1TB hard drive is a newly designed product optimised for demanding business-critical and nearline enterprise storage environments, including networked and tiered storage solutions, reference/compliance storage, disk-to-disk backup and restore, archiving solutions, and rich media content storage and collaboration.

The company also claims that the new Barracuda 7200.11 hard drive boosts reliability, with an unrecoverable error rate that is 10 times better than desktop-class drives and a 1.2-million-hour Mean Time Between Failure at full 24 x 7 data duty cycles.


Read More:
Seagate 1 TB HDD
and Hitachi 1 TB HDD

Also interesting: Hitachi

Thursday, November 1, 2007

Software Testing: Key Concepts

Taxonomy
There is a plethora of testing methods and testing techniques, serving multiple purposes in different life-cycle phases. Classified by purpose, software testing can be divided into correctness testing, performance testing, reliability testing and security testing. Classified by life-cycle phase, software testing falls into the following categories: requirements phase testing, design phase testing, program phase testing, evaluation of test results, installation phase testing, acceptance testing and maintenance testing. By scope, software testing can be categorized as follows: unit testing, component testing, integration testing, and system testing.
Correctness testing
Correctness is the minimum requirement of software, and the essential purpose of testing. Correctness testing needs some type of oracle to tell the right behavior from the wrong one. The tester may or may not know the inside details of the software module under test, e.g. control flow, data flow, etc. Therefore, either a white-box or a black-box point of view can be taken in testing software. Note that the black-box and white-box ideas are not limited to correctness testing only.

Black-box testing
The black-box approach is a testing method in which test data are derived from the specified functional requirements without regard to the final program structure. [Perry90] It is also termed data-driven, input/output-driven [Myers79], or requirements-based [Hetzel88] testing. Because only the functionality of the software module is of concern, black-box testing also mainly refers to functional testing, a testing method that emphasizes executing the functions and examining their input and output data. [Howden87] The tester treats the software under test as a black box: only the inputs, outputs and specification are visible, and the functionality is determined by observing the outputs for corresponding inputs. In testing, various inputs are exercised and the outputs are compared against the specification to validate correctness. All test cases are derived from the specification. No implementation details of the code are considered.

It is obvious that the more we have covered in the input space, the more problems we will find, and therefore the more confident we will be about the quality of the software. Ideally we would be tempted to exhaustively test the input space. But as stated above, exhaustively testing the combinations of valid inputs is impossible for most programs, let alone considering invalid inputs, timing, sequence, and resource variables. Combinatorial explosion is the major roadblock in functional testing. To make things worse, we can never be sure the specification is either correct or complete. Due to limitations of the language used in specifications (usually natural language), ambiguity is often inevitable. Even if we use some type of formal or restricted language, we may still fail to write down all the possible cases in the specification. Sometimes the specification itself becomes an intractable problem: it is not possible to specify precisely every situation that can be encountered using limited words. And people can seldom specify clearly what they want; they usually can tell whether a prototype is, or is not, what they want only after it has been finished. Specification problems contribute approximately 30 percent of all bugs in software. [Beizer95]

Research in black-box testing mainly focuses on how to maximize the effectiveness of testing with minimum cost, usually measured by the number of test cases. It is not possible to exhaust the input space, but it is possible to exhaustively test a subset of the input space. Partitioning is one of the common techniques. If we have partitioned the input space and assume all the input values in a partition are equivalent, then we only need to test one representative value in each partition to sufficiently cover the whole input space. Domain testing [Beizer95] partitions the input domain into regions, and considers the input values in each domain an equivalence class. Domains can be exhaustively tested and covered by selecting one or more representative values in each. Boundary values are of special interest: experience shows that test cases exploring boundary conditions have a higher payoff than test cases that do not. Boundary value analysis [Myers79] requires that one or more boundary values be selected as representative test cases. The difficulty with domain testing is that incorrect domain definitions in the specification cannot be efficiently discovered.
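A small illustrative sketch of partitioning plus boundary value analysis, written as JUnit 4 tests; the isValidAge() validator and its 0..130 range are made up for the example.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class AgeValidatorTest {

    // The system under test: a hypothetical validator that accepts
    // ages from 0 to 130 inclusive.
    static boolean isValidAge(int age) {
        return age >= 0 && age <= 130;
    }

    // One representative per equivalence class, plus the values
    // sitting exactly on and just outside each boundary.
    @Test public void belowLowerBoundary() { assertFalse(isValidAge(-1));  }
    @Test public void onLowerBoundary()    { assertTrue(isValidAge(0));    }
    @Test public void insidePartition()    { assertTrue(isValidAge(42));   }
    @Test public void onUpperBoundary()    { assertTrue(isValidAge(130));  }
    @Test public void aboveUpperBoundary() { assertFalse(isValidAge(131)); }
}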

Good partitioning requires knowledge of the software structure. A good testing plan will not only contain black-box testing, but also white-box approaches, and combinations of the two.

White-box testing
In contrast to black-box testing, in white-box testing the software is viewed as a white box, or glass box, since the structure and flow of the software under test are visible to the tester. Testing plans are made according to the details of the software implementation, such as programming language, logic, and style. Test cases are derived from the program structure. White-box testing is also called glass-box testing, logic-driven testing [Myers79] or design-based testing [Hetzel88].

There are many techniques available in white-box testing, because the problem of intractability is eased by specific knowledge of, and attention to, the structure of the software under test. The intention of exhausting some aspect of the software is still strong in white-box testing, and some degree of exhaustion can be achieved, such as executing each line of code at least once (statement coverage), traversing every branch (branch coverage), or covering all possible combinations of true and false condition predicates (multiple condition coverage). [Parrington89]
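For illustration, here is a hypothetical discount() function with two independent branches, and a pair of inputs that together achieve branch coverage; none of these names come from the text.

public class CoverageDemo {

    static double discount(double total, boolean member) {
        double rate = 0.0;
        if (total > 100.0) rate += 0.05;   // branch A: true / false
        if (member)        rate += 0.10;   // branch B: true / false
        return total * (1.0 - rate);
    }

    public static void main(String[] args) {
        // Test 1 drives both branches true; test 2 drives both false.
        // Together they reach branch coverage; test 1 alone already
        // gives statement coverage, since every line executes.
        System.out.println(discount(200.0, true));   // 170.0
        System.out.println(discount(50.0,  false));  //  50.0
        // Multiple condition coverage would additionally require the
        // (true, false) and (false, true) branch combinations.
    }
}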

Control-flow testing, loop testing, and data-flow testing all map the corresponding flow structure of the software onto a directed graph. Test cases are carefully selected based on the criterion that all the nodes or paths are covered or traversed at least once. By doing so we may discover unnecessary "dead" code: code that is of no use or never gets executed at all, and which cannot be discovered by functional testing.

In mutation testing, the original program code is perturbed and many mutated programs are created, each containing one fault. Each faulty version of the program is called a mutant. Test data are selected based on their effectiveness in failing the mutants: the more mutants a test case can kill, the better the test case is considered. The problem with mutation testing is that it is too computationally expensive to use. (A toy sketch of the mutant idea follows below.)

The boundary between the black-box approach and the white-box approach is not clear-cut. Many of the testing strategies mentioned above may not be safely classified as either black-box or white-box testing. The same is true of transaction-flow testing, syntax testing, finite-state testing, and many other testing strategies not discussed in this text. One reason is that all the above techniques need some knowledge of the specification of the software under test. Another reason is that the idea of specification itself is broad: it may contain any requirement, including the structure, programming language, and programming style, as part of the specification content.
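To make the mutant idea concrete, a toy sketch: max() is the original, maxMutant() contains one seeded fault, and a test input kills the mutant only if it makes the two disagree. All names here are invented for the example.

public class MutationDemo {

    static int max(int a, int b)       { return a > b ? a : b; }  // original
    static int maxMutant(int a, int b) { return a < b ? a : b; }  // fault: > flipped to <

    public static void main(String[] args) {
        // A strong test case distinguishes original from mutant ("kills" it):
        System.out.println(max(2, 3) == maxMutant(2, 3));  // false -> mutant killed
        // A weak test case cannot tell them apart (mutant survives):
        System.out.println(max(5, 5) == maxMutant(5, 5));  // true  -> mutant survives
    }
}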

We may be reluctant to consider random testing a testing technique, since test case selection is so simple and straightforward: the cases are chosen at random. However, a study in [Duran84] indicates that random testing is more cost-effective for many programs. Some very subtle errors can be discovered at low cost. Nor is it inferior in coverage to other, carefully designed testing techniques. One can also obtain a reliability estimate from random testing results, based on operational profiles. Effectively combining random testing with other testing techniques may yield more powerful and cost-effective testing strategies.
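As a small sketch of the idea, this hypothetical harness feeds 10,000 random arrays to Arrays.sort() and checks a simple oracle property (the output is in non-decreasing order) rather than comparing against hand-computed expected outputs.

import java.util.Arrays;
import java.util.Random;

public class RandomSortTest {
    public static void main(String[] args) {
        Random rnd = new Random(42);         // fixed seed: reproducible runs
        for (int run = 0; run < 10000; run++) {
            int[] a = new int[rnd.nextInt(100)];
            for (int i = 0; i < a.length; i++) {
                a[i] = rnd.nextInt();        // random input selection
            }
            Arrays.sort(a);                  // the system under test
            for (int i = 1; i < a.length; i++) {
                if (a[i - 1] > a[i]) {
                    throw new AssertionError("unsorted result in run " + run);
                }
            }
        }
        System.out.println("10,000 random arrays sorted correctly");
    }
}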

Performance testing
Not all software systems have explicit performance specifications, but every system has implicit performance requirements: the software should not take infinite time or infinite resources to execute. "Performance bugs" is a term sometimes used for design problems in software that cause the system performance to degrade.

Performance has always been a great concern and a driving force of computer evolution. Performance evaluation of a software system usually includes: resource usage, throughput, stimulus-response time and queue lengths detailing the average or maximum number of tasks waiting to be serviced by selected resources. Typical resources that need to be considered include network bandwidth requirements, CPU cycles, disk space, disk access operations, and memory usage [Smith90]. The goal of performance testing can be performance bottleneck identification, performance comparison and evaluation, etc. The typical method of doing performance testing is using a benchmark -- a program, workload or trace designed to be representative of the typical system usage. [Vokolos98]
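As a toy illustration of the benchmark idea, the sketch below times a stand-in workload and reports throughput; a real performance test would add warm-up runs, repetitions and statistics, and would use a workload representative of actual system usage.

public class ThroughputBenchmark {
    public static void main(String[] args) {
        final int ops = 5000000;
        long sink = 0;                       // consume results so work isn't optimized away
        long start = System.nanoTime();
        for (int i = 0; i < ops; i++) {
            sink += Integer.toBinaryString(i).length();  // stand-in workload
        }
        double ms = (System.nanoTime() - start) / 1e6;
        System.out.printf("%d ops in %.1f ms (%.0f ops/ms), sink=%d%n",
                          ops, ms, ops / ms, sink);
    }
}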

Reliability testing
Software reliability refers to the probability of failure-free operation of a system. It is related to many aspects of software, including the testing process. Directly estimating software reliability by quantifying its related factors can be difficult. Testing is an effective sampling method to measure software reliability. Guided by the operational profile, software testing (usually black-box testing) can be used to obtain failure data, and an estimation model can be further used to analyze the data to estimate the present reliability and predict future reliability. Therefore, based on the estimation, the developers can decide whether to release the software, and the users can decide whether to adopt and use the software. Risk of using software can also be assessed based on reliability information. [Hamlet94] advocates that the primary goal of testing should be to measure the dependability of tested software.

There is agreement on the intuitive meaning of dependable software: it does not fail in unexpected or catastrophic ways. [Hamlet94] Robustness testing and stress testing are variants of reliability testing based on this simple criterion.

The robustness of a software component is the degree to which it can function correctly in the presence of exceptional inputs or stressful environmental conditions. [IEEE90] Robustness testing differs from correctness testing in that the functional correctness of the software is not of concern; it only watches for robustness problems such as machine crashes, process hangs or abnormal termination. The oracle is relatively simple, so robustness testing can be made more portable and scalable than correctness testing. This research has drawn growing interest recently, most of it using commercial operating systems as targets, such as the work in [Koopman97] [Kropp98] [Ghosh98] [Devale99] [Koopman99].

Stress testing, or load testing, is often used to test the whole system rather than the software alone. In such tests the software or system is exercised with loads at or beyond the specified limits. Typical stresses include resource exhaustion, bursts of activity, and sustained high loads.

Security testing
Software quality, reliability and security are tightly coupled. Flaws in software can be exploited by intruders to open security holes. With the development of the Internet, software security problems are becoming even more severe.

Many critical software applications and services have integrated security measures against malicious attacks. The purpose of security testing of these systems includes identifying and removing software flaws that may potentially lead to security violations, and validating the effectiveness of security measures. Simulated security attacks can be performed to find vulnerabilities.

Wednesday, October 31, 2007

IT News: Democratising IT

A New Model For PC Penetration
INDIA has emerged as a global leader in the advance of information technology. Yet the country faces a fundamental challenge — building on its successes by enabling greater access to technology for its people. This will drive expanded economic growth and opportunity. Less than 3% of Indians own a personal computer — compared to nearly 8% of Chinese, almost 14% of Brazilians and more than 15% of Russians.

Despite the very low penetration of computers in India, the impact has been profound. India is home to three of the world’s 10 biggest IT firms — Tata, Infosys, and Wipro — and already generates nearly $40 billion in revenues from its IT software and services sector. Nasscom forecasts this figure to grow by nearly 27% next year.

It must be recognised that the benefits of broader IT use and deeper Internet access are substantial, and will be a catalyst for — not a result of — economic growth and modernisation. India is already benefiting from e-governance initiatives that deliver real-time tallying of results of the world’s largest elections and from technology-driven distance learning that brings the world’s educational resources to students without regard to location or economic background.

But cost has been a major roadblock for broader technology adoption in India. Reducing taxes and tariffs is essential to facilitating broader access to technology and driving growth in the technology sectors. Global hardware exports are 43% of Chinese exports versus only 2.3% for India. India is clearly missing out on a big opportunity. If it doesn’t act soon, investments will go further into China and emerging countries such as Vietnam, instead of India.
Consider also that, in India, a typical desktop computer costs 44% of the average Indian’s annual wage. Brazil’s experience in supporting technology adoption is particularly instructive. Since it reduced taxes on computer purchases two years ago, the PC market has tripled, and more than two million families bought their first PC, making Brazil the world’s fourth-largest PC market. What was more important was the multiplier effect this had on the economy: thousands of IT industry jobs were created and government revenue from the IT sector increased by 50%.

But cost isn’t the only barrier. IT complexity also threatens access to technology while increasing its cost and environmental impact. We are all members of what we at Dell call the ReGeneration — a new global movement concerned with the regeneration of not just our businesses but also our planet. Environmental protection efforts are improving, as reflected in the Nobel Prize jointly awarded to former US vice-president Al Gore and the Intergovernmental Panel on Climate Change headed by Rajendra Pachauri. And technology is an important part of these efforts. The future will bring even more benefits.
By 2020, microprocessors will run one thousand times as many computations per second as they do today. That will mean enormous gains in productivity and efficiency, giving people unimaginable power to access, organise, and transform information. Indian citizens will benefit more fully from this progress as government and industry leaders strengthen their cooperation, helping create the conditions in which IT can flourish and reach all people, businesses, and institutions across the country.

India plays a pivotal role in global IT. Technology users in the western world benefit every day from the work of bright, talented Indian employees and their constant innovation. But more than serving as the world’s software writer or back office, India is harnessing the productivity, efficiency, and innovation benefits of IT as a foundation for global economic competitiveness. I see industry working, with great commitment, with India’s government to build on this progress, and to help further democratise access to technology, so that more Indian citizens enjoy even more of technology’s benefits with an ever-decreasing impact on our environment. That is our shared responsibility. By harnessing these forces, the democratisation and simplification of technology, we can make a positive impact not just on our economies, but also on our planet.

(Michael Dell)