Bharat Banate's Work Profile


Sunday, September 30, 2007

Security: Steganography

Over the past couple of years, steganography has been the source of a lot of discussion, particularly as it was suspected that terrorists connected with the September 11 attacks might have used it for covert communications. While no such connection has been proven, the concern points out the effectiveness of steganography as a means of obscuring data. Indeed, along with encryption, steganography is one of the fundamental ways by which data can be kept confidential. This article will offer a brief introductory discussion of steganography: what it is, how it can be used, and the true implications it can have on information security.

What is Steganography?

While we are discussing it in terms of computer security, steganography is really nothing new; it has been around since the times of ancient Rome. In ancient Rome and Greece, text was traditionally written on wax that was poured on top of stone tablets. If the sender of the information wanted to obscure the message - for purposes of military intelligence, for instance - they would use steganography: the wax would be scraped off, the message inscribed or written directly on the tablet, and fresh wax poured on top of it, thereby obscuring not just its meaning but its very existence [1].

According to Dictionary.com, steganography (also known as "steg" or "stego") is "the art of writing in cipher, or in characters, which are not intelligible except to persons who have the key; cryptography" [2]. In computer terms, steganography has evolved into the practice of hiding a message within a larger one in such a way that others cannot discern the presence or contents of the hidden message [3]. In contemporary practice, this means hiding a file in some form of multimedia, such as an image, an audio file (like a .wav or .mp3), or even a video file.

What is Steganography Used for?

Like many security tools, steganography can be used for a variety of reasons, some good, some not so good. Legitimate purposes can include things like watermarking images for reasons such as copyright protection. Digital watermarks (also known as fingerprinting, significant especially in copyrighting material) are similar to steganography in that they are overlaid in files, which appear to be part of the original file and are thus not easily detectable by the average person. Steganography can also be used as a way to make a substitute for a one-way hash value (where you take a variable length input and create a static length output string to verify that no changes have been made to the original variable length input)[4]. Further, steganography can be used to tag notes to online images (like post-it notes attached to paper files). Finally, steganography can be used to maintain the confidentiality of valuable information, to protect the data from possible sabotage, theft, or unauthorized viewing[5].
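The one-way hash idea mentioned above can be sketched in a few lines of Python. SHA-256 is used here purely as an illustration; the point is the fixed-length fingerprint of a variable-length input, not any particular algorithm:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a fixed-length digest of a variable-length input."""
    return hashlib.sha256(data).hexdigest()

original = b"The quick brown fox"
digest = fingerprint(original)

# Any change to the input, however small, yields a different digest,
# which is what lets the hash verify that nothing has been altered.
tampered = b"The quick brown fox."
assert fingerprint(original) == digest
assert fingerprint(tampered) != digest
```

Because the output length is constant (64 hex characters here) regardless of input size, the digest can stand in as a compact integrity check for the original data.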

Unfortunately, steganography can also be used for illegitimate reasons. For instance, if someone was trying to steal data, they could conceal it in another file or files and send it out in an innocent looking email or file transfer. Furthermore, a person with a hobby of saving pornography, or worse, to their hard drive, may choose to hide the evidence through the use of steganography. And, as was pointed out in the concern for terroristic purposes, it can be used as a means of covert communication. Of course, this can be both a legitimate and an illegitimate application.

Steganography Tools

There are a vast number of tools available for steganography. An important distinction that should be made among the tools available today is the difference between tools that do steganography and tools that do steganalysis, the method of detecting steganography and destroying the original message. Steganalysis focuses on detection and destruction rather than on recovering and decrypting the hidden message, because decryption can be difficult to do unless the encryption keys are known.

A comprehensive discussion of steganography tools is beyond the scope of this article. However, there are many good places to find steganography tools on the Net. One good place to start your search for stego tools is on Neil Johnson's Steganography and Digital Watermarking Web site. The site includes an extensive list of steganography tools. Another comprehensive tools site is located at the StegoArchive.com.

For steganalysis tools, a good site to start with is Neil Johnson's Steganalysis site. Niels Provos's site is also a great reference, but it is currently being relocated, so keep checking back on its progress.

The plethora of tools available also tends to span the spectrum of operating systems. Windows, DOS, Linux, Mac, Unix: you name it, and you can probably find it.

How Do Steganography Tools Work?

To show how easy steganography is, I started out by downloading one of the more popular freeware tools available: F5. I then moved to a tool called SecurEngine, which hides text files within larger text files, and lastly to a tool that hides files in MP3s, called MP3Stego. I also tested one commercial steganography product, Steganos Suite.

F5 was developed by Andreas Westfeld, and runs as a DOS client. A couple of GUIs were later developed: one named "Frontend", developed by Christian Wohne, and the other, named "Stegano", by Thomas Biel. I tried F5, beta version 12, and found it very easy to encode a message into a JPEG file, even though the buttons in the GUI are labeled in German. Users simply follow the buttons, entering the JPEG file path and then the location of the data to be hidden (in my case, a simple text file created in Notepad), at which point the program prompts for a pass phrase. As you can see from the before and after pictures below, it is very hard to tell them apart, embedded message or not.
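F5 itself embeds data in JPEG DCT coefficients, which is more involved than can be shown here. As a rough illustration of the general principle many simple stego tools share, the following Python sketch hides a message in the least-significant bits of a raw cover buffer. The 4-byte length prefix is a hypothetical toy convention for this example, not F5's actual format:

```python
def embed(cover: bytes, message: bytes) -> bytes:
    """Hide the message in the least-significant bit of each cover byte."""
    bits = []
    # Prefix the message with its length so the extractor knows where to stop.
    for byte in len(message).to_bytes(4, "big") + message:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    if len(bits) > len(cover):
        raise ValueError("cover too small for this message")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(stego)

def extract(stego: bytes) -> bytes:
    """Recover a message embedded by embed()."""
    def read_bytes(start: int, count: int) -> bytes:
        out = bytearray()
        for b in range(count):
            byte = 0
            for i in range(8):
                byte = (byte << 1) | (stego[start + b * 8 + i] & 1)
            out.append(byte)
        return bytes(out)
    length = int.from_bytes(read_bytes(0, 4), "big")
    return read_bytes(32, length)

cover = bytes(range(256)) * 10           # stand-in for raw image pixel data
stego = embed(cover, b"secret")
print(extract(stego))                    # b'secret'
```

Because only the lowest bit of each byte changes, the cover data looks essentially unchanged to a casual viewer, which is exactly why the before and after images are so hard to tell apart.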

Steganography and Security

As mentioned previously, steganography is an effective means of hiding data, thereby protecting the data from unauthorized or unwanted viewing. But stego is simply one of many ways to protect the confidentiality of data. It is probably best used in conjunction with another data-hiding method. When used in combination, these methods can all be a part of a layered security approach. Some good complementary methods include:

  • Encryption - Encryption is the process of passing data or plaintext through a series of mathematical operations that generate an alternate form of the original data known as ciphertext. The encrypted data can only be read by parties who have been given the necessary key to decrypt the ciphertext back into its original plaintext form. Encryption doesn't hide data, but it does make it hard to read!
  • Hidden directories (Windows) - Windows offers this feature, which allows users to hide files. Using this feature is as easy as changing the properties of a directory to "hidden", and hoping that no one displays all types of files in their explorer.
  • Hiding directories (Unix) - Directories can be hidden within existing directories that contain many files, such as the /dev directory on a Unix implementation, or by creating a directory name that starts with three dots (...) instead of the normal single or double dot.
  • Covert channels - Some tools can be used to transmit valuable data in seemingly normal network traffic. One such tool is Loki. Loki is a tool that hides data in ICMP traffic (like ping).

Protecting Against Malicious Steganography

Unfortunately, all of the methods mentioned above can also be used to hide illicit, unauthorized or unwanted activity. What can you do to prevent or detect issues with stego? There is no easy answer. If someone has decided to hide their data, they will probably be able to do so fairly easily. The only way to detect steganography is to be actively looking for it in specific files, or to get very lucky. Sometimes an actively enforced security policy can provide the answer: this would require the implementation of company-wide acceptable use policies that restrict the installation of unauthorized programs on company computers.

Using the tools that you already have to detect movement and behavior of traffic on your network may also be helpful. Network intrusion detection systems can help administrators gain an understanding of normal traffic in and around the network and can thus assist in detecting any type of anomaly, especially an increase in the movement of large image files around the network. If the administrator is aware of this sort of anomalous activity, it may warrant further investigation. Host-based intrusion detection systems deployed on computers may also help to identify anomalous storage of image and/or video files.

A research paper by Stefan Hetzel cites two methods of attacking steganography, which really are also methods of detecting it. They are the visual attack (actually seeing the differences in the files that are encoded) and the statistical attack: "The idea of the statistical attack is to compare the frequency distribution of the colors of a potential stego file with the theoretically expected frequency distribution for a stego file." It might not be the quickest method of protection, but if you suspect this type of activity, it might be the most effective. For JPEG files specifically, a tool called Stegdetect, which looks for signs of steganography in JPEG files, can be employed. Stegbreak, a companion tool to Stegdetect, works to decrypt possible messages encoded in a suspected steganographic file, should that be the path you wish to take once the stego has been detected.
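The statistical attack quoted above can be illustrated with a toy chi-square-style computation. This is a simplified sketch of the idea, not Stegdetect's actual implementation: after heavy LSB embedding, the counts of the two values in each pair (2k and 2k+1) tend toward equality, so the statistic drifts toward zero for stego files while staying larger for natural data:

```python
from collections import Counter

def chi_square_statistic(samples):
    """Compare observed counts of each pair (2k, 2k+1) of byte values
    against their pairwise mean. LSB embedding flips values only within
    a pair, so saturated embedding equalizes the two counts and drives
    this statistic toward zero."""
    counts = Counter(samples)
    stat = 0.0
    for k in range(128):
        observed = counts[2 * k]
        expected = (counts[2 * k] + counts[2 * k + 1]) / 2
        if expected > 0:
            stat += (observed - expected) ** 2 / expected
    return stat

# Skewed pair counts (typical of natural data) give a nonzero statistic:
print(chi_square_statistic([0] * 10))        # 5.0
# Perfectly balanced pairs (typical after heavy embedding) give zero:
print(chi_square_statistic([0] * 10 + [1] * 10))  # 0.0
```

A real detector would compute this over a sliding window of the file and convert the statistic into a probability, but the core comparison is the one shown.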

Conclusions

Steganography is a fascinating and effective method of hiding data that has been used throughout history. Methods can be employed to uncover such devious tactics, but the first step is awareness that such methods even exist. There are also many good reasons to use this type of data hiding, including watermarking and more secure central storage of such things as passwords or key processes. Regardless, the technology is easy to use and difficult to detect. The more you know about its features and functionality, the further ahead you will be in the game.

Resources:

[1] Steganography, by Neil F. Johnson, George Mason University,
http://www.jjtc.com/stegdoc/sec202.html

[2] http://dictionary.reference.com/search?q=steganography

[3] The Free On-line Dictionary of Computing, © 1993-2001 Denis Howe
http://www.nightflight.com/foldoc/index.html

[4] Applied Cryptography, Bruce Schneier, John Wiley and Sons Inc., 1996

[5] Steganography: Hidden Data, by Deborah Radcliff, June 10, 2002,
http://www.computerworld.com/securitytopics/security/story/0,10801,71726,00.html

Friday, September 28, 2007

SPM: Software Project Management


Project Schedule


The project schedule is the core of the project plan. It is used by the project manager to commit people to the project and show the organization how the work will be performed. Schedules are used to communicate final deadlines and, in some cases, to determine resource needs. They are also used as a kind of checklist to make sure that every task necessary is performed. If a task is on the schedule, the team is committed to doing it. In other words, the project schedule is the means by which the project manager brings the team and the project under control.
The project schedule is a calendar that links the tasks to be done with the resources that will do them. Before a project schedule can be created, the project manager must have a work breakdown structure (WBS), an effort estimate for each task, and a resource list with availability for each resource. If these are not yet available, it may be possible to create something that looks like a schedule, but it will essentially be a work of fiction.

A project manager's time is better spent on working with the team to create a WBS and estimates (using a consensus-driven estimation method like Wideband Delphi—see Chapter 3) than on trying to build a project schedule without them. The reason for this is that a schedule itself is an estimate: each date in the schedule is estimated, and if those dates do not have the buy-in of the people who are going to do the work, the schedule will almost certainly be inaccurate.
The Wideband Delphi process is explained in detail in Chapter 3: Estimation.

There are many project scheduling software products which can do much of the tedious work of calculating the schedule automatically, and plenty of books and tutorials dedicated to teaching people how to use them. However, before a project manager can use these tools, he should understand the concepts behind the WBS, dependencies, resource allocation, critical paths, Gantt charts and earned value. These are the real keys to planning a successful project.

The most popular tool for creating a project schedule is Microsoft Project. There are also free and open source project scheduling tools available for most platforms which feature task lists, resource allocation, predecessors and Gantt charts. Other project scheduling software packages include:
Open Workbench
dotProject
netOffice
TUTOS
Allocate Resources to the Tasks

The first step in building the project schedule is to identify the resources required to perform each of the tasks required to complete the project. (Generating project tasks is explained in more detail in the Wideband Delphi Estimation Process page.) A resource is any person, item, tool, or service that is needed by the project that is either scarce or has limited availability.

Many project managers use the terms "resource" and "person" interchangeably, but people are only one kind of resource. The project could include computer resources (like shared computer room, mainframe, or server time), locations (training rooms, temporary office space), services (like time from contractors, trainers, or a support team), and special equipment that will be temporarily acquired for the project. Most project schedules only plan for human resources—the other kinds of resources are listed in the resource list, which is part of the project plan.

One or more resources must be allocated to each task. To do this, the project manager must first assign the task to people who will perform it. For each task, the project manager must identify one or more people on the resource list capable of doing that task and assign it to them. Once a task is assigned, the team member who is performing it is not available for other tasks until the assigned task is completed. While some tasks can be assigned to any team member, most can be performed only by certain people. If those people are not available, the task must wait.
Identify Dependencies

Once resources are allocated, the next step in creating a project schedule is to identify dependencies between tasks. A task has a dependency if it involves an activity, resource, or work product that is subsequently required by another task. Dependencies come in many forms: a test plan can't be executed until a build of the software is delivered; code might depend on classes or modules built in earlier stages; a user interface can't be built until the design is reviewed. If Wideband Delphi is used to generate estimates, many of these dependencies will already be represented in the assumptions. It is the project manager's responsibility to work with everyone on the engineering team to identify these dependencies. The project manager should start by taking the WBS and adding dependency information to it: each task in the WBS is given a number, and the number of any task that it is dependent on should be listed next to it as a predecessor. The following figure shows the four ways in which one task can be dependent on another.



Create the Schedule


Once the resources and dependencies are assigned, the software will arrange the tasks to reflect the dependencies. The software also allows the project manager to enter effort and duration information for each task; with this information, it can calculate a final date and build the schedule.
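As a rough sketch of the calculation such software performs, the following Python fragment does a forward pass over a hypothetical WBS, handling only finish-to-start links (one of the four dependency types): each task's earliest start is the latest finish of its predecessors. Task names and durations are invented for illustration:

```python
def schedule(tasks):
    """tasks maps name -> (duration, [predecessor names]), finish-to-start only.
    Returns name -> (earliest_start, earliest_finish).
    Assumes the dependency graph has no cycles."""
    result = {}
    def visit(name):
        if name in result:
            return result[name]
        duration, preds = tasks[name]
        # A task can start only once every predecessor has finished.
        start = max((visit(p)[1] for p in preds), default=0)
        result[name] = (start, start + duration)
        return result[name]
    for name in tasks:
        visit(name)
    return result

wbs = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}
print(schedule(wbs)["D"])  # (7, 8): D waits for the longer of B and C
```

The chain A → C → D that determines the final date is the critical path; any slip on those tasks pushes the whole schedule out, which is exactly what the scheduling tools compute for you.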

Each task is represented by a bar, and the dependencies between tasks are represented by arrows. Each arrow either points to the start or the end of the task, depending on the type of predecessor. The black diamond between tasks D and E is a milestone, or a task with no duration. Milestones are used to show important events in the schedule. The black bar above tasks D and E is a summary task, which shows that these tasks are two subtasks of the same parent task. Summary tasks can contain other summary tasks as subtasks. For example, if the team used an extra Wideband Delphi session to decompose a task in the original WBS into subtasks, the original task should be shown as a summary task with the results of the second estimation session as its subtasks.


Security: Firewall Information

What is a firewall?

A term borrowed from construction, aircraft, and automobile design, a firewall is a barrier that segregates two areas to protect one space from the environment of the other. In buildings or airframes, it is designed to prevent fire from spreading from one section to another. In racing, it protects the driver from a possible fuel tank fire. Also in automobiles, the bulkhead separating the engine compartment from the passenger compartment is called a firewall.

In computing terms, a firewall isolates a computer or network from another computer or network. Most often, this creates a so-called "trusted zone" on the inside of the firewall (your local network), which is protected from the untrusted zone outside (the internet). Some network firewalls sit between sections of the network; this creates DMZs, or De-Militarized Zones, referring to the military term for areas that separate two opposing factions to reduce the risk of war. Certain devices, such as public web servers, that need to interface more with untrusted zones will be in the DMZ with a firewall between them and the local network, offering more protection for that network.

As with firewalls in buildings, a certain amount of penetration of the firewall is allowed, but these penetrations, or ports, are controlled and safeguarded against malicious traffic trying to get in.

Ports

In networking, one will often hear the term port. Ports, according to the Internet Assigned Numbers Authority (IANA, which coordinates functions for the internet), "name the ends of logical connections which carry long term conversations. For the purpose of providing services to unknown callers, a service contact port is defined." Essentially, this is an addressing scheme that allows the computer to assign meaning to incoming and outgoing information.

Ports fall into three categories:

  • Port numbers that range from 0 through 1023 are called Well Known Ports. On most systems, they can only be used by system (or root) processes or by programs executed by privileged users. The IANA has assigned specific uses for most of these ports.
  • The Registered Ports are those from 1024 through 49151 and can be used by ordinary user processes or programs executed by ordinary users. Many of these ports are also assigned.
  • The Dynamic and/or Private Ports are those from 49152 through 65535. The name is self-explanatory; they are not assigned.
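These three ranges can be captured in a small Python helper (a trivial sketch, using the IANA boundaries quoted above):

```python
def port_category(port: int) -> str:
    """Classify a TCP/UDP port number into its IANA range."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers range from 0 to 65535")
    if port <= 1023:
        return "well-known"
    if port <= 49151:
        return "registered"
    return "dynamic/private"

print(port_category(80))      # well-known (HTTP)
print(port_category(8080))    # registered
print(port_category(50000))   # dynamic/private
```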

So what firewalls do is filter the data coming into them, allowing information for certain ports to go through and rejecting others, according to preset rules. There are three different ways this is done:

  • Packet filtering - Packets (small chunks of data) are analyzed against a set of filters.
  • Proxy service - Doesn't accept packets coming in from the untrusted zone unless they were specifically requested by a computer in the trusted zone.
  • Stateful inspection - Doesn't examine the entire incoming packet, but compares certain key parts of that packet to defining characteristics derived from packets traveling inside the firewall to the outside.
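A packet filter can be sketched as a first-match-wins rule list. The rules and field names below are hypothetical illustrations; real firewalls match on many more attributes (source and destination addresses, interfaces, flags):

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_port: int
    protocol: str

# Rules are checked in order; the empty-condition rule at the end
# acts as a default-deny catch-all.
RULES = [
    ("allow", {"dst_port": 80, "protocol": "tcp"}),
    ("allow", {"dst_port": 443, "protocol": "tcp"}),
    ("deny",  {}),
]

def filter_packet(packet: Packet) -> str:
    for action, conditions in RULES:
        if all(getattr(packet, field) == value
               for field, value in conditions.items()):
            return action
    return "deny"

print(filter_packet(Packet("203.0.113.5", 80, "tcp")))   # allow
print(filter_packet(Packet("203.0.113.5", 23, "tcp")))   # deny
```

Note the default-deny posture: anything not explicitly permitted is rejected, which is the usual recommendation for firewall rule sets.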



Thursday, September 27, 2007

Internet: Internet Radio

Internet radio (aka e-Radio) is an audio broadcasting service transmitted via the Internet. Broadcasting on the Internet is usually referred to as webcasting, since it is not transmitted broadly through wireless means but is delivered over the World Wide Web. The term "e-Radio" suggests a streaming medium that presents listeners with a continuous stream of audio over which they have no control, much like traditional broadcast media. It is not synonymous with podcasting, which involves downloading and therefore raises copyright issues. Nor does e-Radio suggest "on-demand" file serving. Many Internet "radio stations" are associated with a corresponding traditional "terrestrial" radio station or radio network. Internet-only radio stations are usually independent of such associations.

Internet radio "stations" are usually accessible from anywhere in the world—for example, to listen to an Australian station from Europe or America. This makes it a popular service for expatriates and for listeners with interests not adequately served by local radio stations (such as progressive rock, anime themed music, classical music, 24-hour stand up comedy, and others). Some Internet radio services offer news, sports, talkback, and various genres of music—everything that is on the radio station being simulcast over the internet with a netcast stream.

Freedom of the Airwaves
Radio broadcasting began in the early 1920s, but it wasn't until the introduction of the transistor radio in 1954 that radio became available in mobile situations. Internet radio is in much the same place. Until the 21st century, the only way to obtain radio broadcasts over the Internet was through your PC. That will soon change, as wireless connectivity will feed Internet broadcasts to car radios, PDAs and cell phones. The next generation of wireless devices will greatly expand the reach and convenience of Internet radio.

Uses and Advantages
Traditional radio station broadcasts are limited by two factors:
• The power of the station's transmitter (typically 100 miles)
• The available broadcast spectrum (you might get a couple of dozen radio stations locally)
Internet radio has no geographic limitations, so a broadcaster in Kuala Lumpur can be heard in Kansas on the Internet. The potential for Internet radio is as vast as cyberspace itself.

In comparison to traditional radio, Internet radio is not limited to audio. An Internet radio broadcast can be accompanied by photos or graphics, text and links, as well as interactivity, such as message boards and chat rooms. This advancement allows a listener to do more than listen. For example, a listener who hears an ad for a computer printer could order that printer through a link on the Internet radio broadcast Web site. The relationship between advertisers and consumers becomes more interactive and intimate on Internet radio broadcasts. This expanded media capability could also be used in other ways. For example, with Internet radio, you could conduct training or education and provide links to documents and payment options. You could also have interactivity with the trainer or educator and other information on the Internet radio broadcast site.

Internet radio programming offers a wide spectrum of broadcast genres, particularly in music. Broadcast radio is increasingly controlled by smaller numbers of media conglomerates. In some ways, this has led to more mainstreaming of the programming on broadcast radio, as stations often try to reach the largest possible audience in order to charge the highest possible rates to advertisers. Internet radio, on the other hand, offers the opportunity to expand the types of available programming. The cost of "getting on the air" is less for an Internet broadcaster and Internet radio can appeal to "micro-communities" of listeners focused on special music or interests.

Friday, September 21, 2007

Unix: The Unix Philosophy

Essentially, UNIX is made up of files. In fact, every aspect of UNIX is looked at as a file. When we write some data to be displayed on screen, for example, the data is actually written to a screen file, and then a certain device driver in the kernel is activated. This driver controls a particular device, in our case the screen, and the contents of the screen file are displayed on it. Files that relate to hardware are known as "special files".
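This "everything is a file" idea is easy to observe from Python on a Unix-like system (the snippet assumes /dev/null exists, as it does on any standard Linux or Unix installation):

```python
import os
import stat

# /dev/null is a character "special file": it has a name and a path like
# any file, but reads and writes go through a kernel device driver.
info = os.stat("/dev/null")
print(stat.S_ISCHR(info.st_mode))   # True: a character device, not a regular file
print(stat.S_ISREG(info.st_mode))   # False

# Writing to it uses the ordinary file interface; the null driver
# simply discards whatever it receives.
with open("/dev/null", "w") as devnull:
    devnull.write("discarded by the null device driver\n")
```

The same uniform interface is why programs can redirect output to a terminal, a disk file, or a device without changing their code.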

We have one universal file - UNIX itself. But this file is broken up into many other smaller file systems. By default, i.e. when we install UNIX, one root file system and two user file systems are created. Normally file systems correspond to physical sections of the disk - basically the root file system and many user file systems.

These file systems are again broken up into directories (which are again viewed as files) and files. These directories can further have sub-directories and files giving rise to a hierarchical tree-like structure.

In DOS, we sometimes divide the disk into logical sections like C and D. Each of these logical drives has its own set of directories and files. To move from one drive to another we just need to specify the drive at the DOS prompt and hit enter.

But while we are at one drive we can access a file from another drive, and both these drives are always available by default. In UNIX there is a slight difference. While the root file system and the two user file systems that are created by default are loaded, access to any other file system is only possible if it is explicitly mounted. Mounting means nothing but loading the file system into memory. And considering that file systems are viewed by UNIX as files, when the time comes for them to be accessed, they have to be in memory, like any other file.

Take the floppy drive, for example. This too is considered by UNIX as a file. Reads and writes to the floppy drive are first done in a "special file", from which the contents are then transferred to the actual floppy. But to be able to access the floppy drive through the file connected to it, the file has to be mounted, i.e. in memory.

New Technology: RFID from Microsoft

Executive Summary

Whatever you read about packaging, supply chains, or identification, you will come across an article or advertisement for Radio Frequency Identification (RFID). Why does it seem that this technology is being touted as the best thing since sliced bread? And is it just another piece of hype meant to confuse and make us invest money in another piece of technology?

RFID is evolving as a major technology enabler for identifying and tracking goods and assets around the world. It can help hospitals locate expensive equipment more quickly to improve patient care, pharmaceutical companies reduce counterfeiting, and logistics providers improve the management of moveable assets. It also promises to enable new efficiencies in the supply chain by tracking goods from the point of manufacture through to the retail point of sale (POS).

As a result of the potential benefits of RFID:

  • The automotive industry has been using closed-loop RFID systems to track and control major assemblies within a production plant for over 30 years.
  • Many of the world's major retailers have mandated RFID tagging for pallets and cases shipped into their distribution centers to provide better visibility.
  • There are moves in the defense and aerospace industry to mandate the use of RFID to improve supply chain visibility and ensure the authenticity of parts.
  • Regulatory bodies in the United States are moving to the use of ePedigrees based on RFID to prevent the counterfeiting of prescription drugs.
  • Hospitals are using RFID for patient identification and moveable asset tracking.
  • RFID tags are being used to track the movement of farm animals to assist with tracking issues when major animal diseases strike.

But while the technology has received more than its fair share of media coverage recently, many are still unfamiliar with RFID and the benefits it can offer. In the face of this need for clear, comprehensive information about RFID and its benefits, this paper defines the opportunities offered by the technology for all organizations involved in the production, movement, or sale of goods. It is equally relevant for organizations wishing to track or locate existing goods, assets, or equipment.

In addition, the paper seeks to outline the business and technical challenges to RFID deployment and demonstrates how these issues can be addressed with technology from Microsoft and its partners. Above all, it explains how Microsoft technology—which provides the software architecture underpinning the solution rather than the tags or readers—can support the deployment of RFID-based solutions.

What Is RFID Really?


But what is RFID? RFID is the reading of physical tags on single products, cases, pallets, or re-usable containers that emit radio signals to be picked up by reader devices. The tags and readers must be supported by a sophisticated software architecture that enables the collection and distribution of location-based information in near real time. The complete RFID picture combines the technology of the tags and readers with access to global standardized databases, ensuring real-time access to up-to-date information about relevant products at any point in the supply chain. A key component of this RFID vision is the EPC Global Network.

Tags contain a unique identification number called an Electronic Product Code (EPC), and potentially additional information of interest to manufacturers, healthcare organizations, military organizations, logistics providers, retailers, or others that need to track the physical location of goods or equipment. All information stored on RFID tags accompanies items as they travel through a supply chain or other business process. This information, such as product attributes, physical dimensions, prices, or laundering requirements, can be scanned wirelessly by a reader at high speed and from a distance of several meters.

RFID Bill of Materials

So what is the bill of materials for RFID? The component parts are:

Tag or Transponder—An RFID tag is a tiny radio device that is also referred to as a transponder, smart tag, smart label, or radio barcode. The tag comprises a simple silicon microchip (typically less than half a millimeter in size) attached to a small flat aerial and mounted on a substrate. The whole device can then be encapsulated in different materials (such as plastic) dependent upon its intended usage. The finished tag can be attached to an object, typically an item, box, or pallet, and read remotely to ascertain its identity, position, or state. For an active tag there will also be a battery.

Reader or Interrogator—The reader—sometimes called an interrogator or scanner—sends and receives RF data to and from the tag via antennas. A reader may have multiple antennas that are responsible for sending and receiving radio waves.

Host Computer—The data acquired by the readers is then passed to a host computer, which may run specialist RFID software or middleware to filter the data and route it to the correct application, to be processed into useful information.
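One typical middleware filtering step is suppressing the flood of duplicate reads a reader produces while a tag sits in its field. The sketch below is a hypothetical illustration, not any particular vendor's API; the EPC value shown is an invented example identifier:

```python
class TagReadFilter:
    """Suppress duplicate reads of the same EPC within a time window,
    a common smoothing step before events reach business applications."""

    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.last_seen = {}  # EPC -> timestamp of last accepted or seen read

    def accept(self, epc: str, now: float) -> bool:
        """Return True if this read should be passed on, False if it is
        a duplicate within the window."""
        previous = self.last_seen.get(epc)
        self.last_seen[epc] = now
        return previous is None or now - previous >= self.window

f = TagReadFilter(window_seconds=1.0)
tag = "urn:epc:id:sgtin:0614141.107346.2017"   # hypothetical EPC
print(f.accept(tag, now=0.0))   # True: first sighting
print(f.accept(tag, now=0.2))   # False: duplicate within the window
print(f.accept(tag, now=1.5))   # True: window elapsed, report again
```

Filtering like this, before routing events to the correct application, is the kind of work the host computer's RFID middleware performs.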



Mobile Computing: Mobile IP - Part III


Mobile Computing is becoming increasingly important due to the rise in the number of portable computers and the desire to have continuous network connectivity to the Internet irrespective of the physical location of the node. The Internet infrastructure is built on top of a collection of protocols, called the TCP/IP protocol suite. Transmission Control Protocol (TCP) and Internet Protocol (IP) are the core protocols in this suite. IP requires the location of any host connected to the Internet to be uniquely identified by an assigned IP address. This raises one of the most important issues in mobility, because when a host moves to another physical location, it has to change its IP address. However, the higher level protocols require the IP address of a host to be fixed for identifying connections. The Mobile Internet Protocol (Mobile IP) is an extension to the Internet Protocol proposed by the Internet Engineering Task Force (IETF) that addresses this issue. It enables mobile computers to stay connected to the Internet regardless of their location and without changing their IP address. More precisely, Mobile IP is a standard protocol that builds on the Internet Protocol by making mobility transparent to applications and higher level protocols like TCP [6]. This article provides an introduction to Mobile IP and discusses its advantages and disadvantages.

Overview of the Protocol


Mobile IP supports mobility by transparently binding the home address of the mobile node with its care-of address. This mobility binding is maintained by specialized routers known as mobility agents. Mobility agents are of two types - home agents and foreign agents. The home agent, a designated router in the home network of the mobile node, maintains the mobility binding in a mobility binding table where each entry is identified by the tuple <home address, care-of address, association lifetime>. Figure 1 shows a mobility binding table. The purpose of this table is to map a mobile node's home address to its care-of address and forward packets accordingly.
Foreign agents are specialized routers on the foreign network that the mobile node is currently visiting. The foreign agent maintains a visitor list which contains information about the mobile nodes currently visiting that network. Each entry in the visitor list is identified by the tuple <home address, home agent address, media address, lifetime>. Figure 2 shows an instance of a visitor list.
In a typical scenario, the care-of address of a mobile node is the foreign agent's IP address. There can be another kind of care-of address, known as a co-located care-of address, which is usually obtained by some external address assignment mechanism such as DHCP.
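The mobility binding table described above can be sketched as a simple lookup structure. This is a minimal illustration under assumed names (HomeAgent, register, lookup), not the actual protocol implementation; real home agents also track lifetimes per the Mobile IP specification.

```python
import time

class HomeAgent:
    """Toy mobility binding table: home address -> (care-of address, expiry)."""
    def __init__(self):
        self.bindings = {}

    def register(self, home_addr, care_of_addr, lifetime):
        # Each entry binds a home address to a care-of address for a lifetime.
        self.bindings[home_addr] = (care_of_addr, time.time() + lifetime)

    def lookup(self, home_addr):
        entry = self.bindings.get(home_addr)
        if entry is None or time.time() > entry[1]:
            return None        # no valid binding: node is at home, or expired
        return entry[0]        # care-of address to tunnel packets toward
```

A lookup that returns a care-of address tells the home agent to tunnel; a miss means the packet can be delivered normally on the home network.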

The basic Mobile IP protocol has four distinct stages [2]. These are:

  1. Agent Discovery: Agent Discovery consists of the following steps:
    1. Mobility agents advertise their presence by periodically broadcasting Agent Advertisement messages. An Agent Advertisement message lists one or more care-of addresses and a flag indicating whether it is a home agent or a foreign agent.
    2. The mobile node receiving the Agent Advertisement message checks whether the message is from its own home agent and thereby determines whether it is on the home network or a foreign network.

    3. If a mobile node does not wish to wait for the periodic advertisement, it can send out Agent Solicitation messages, to which a mobility agent will respond.
  2. Registration: Registration consists of the following steps:
    1. If a mobile node discovers that it is on the home network, it operates without any mobility services.

    2. If the mobile node is on a new network, it registers with the foreign agent by sending a Registration Request message which includes the permanent IP address of the mobile host and the IP address of its home agent.

    3. The foreign agent in turn performs the registration process on behalf of the mobile host by sending a Registration Request containing the permanent IP address of the mobile node and the IP address of the foreign agent to the home agent.

    4. When the home agent receives the Registration Request, it updates the mobility binding by associating the care-of address of the mobile node with its home address.

    5. The home agent then sends an acknowledgement to the foreign agent.

    6. The foreign agent in turn updates its visitor list by inserting the entry for the mobile node and relays the reply to the mobile node.

    Figure 3 illustrates the registration process.
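The registration steps above can be condensed into a toy function. All names are assumed for illustration; real Mobile IP registration uses UDP Registration Request/Reply messages with authentication extensions, which this sketch omits.

```python
def register_via_foreign_agent(home_addr, home_agent_addr, foreign_agent_addr,
                               binding_table, visitor_list, lifetime=300):
    # Steps 2-3: the mobile node's request is relayed by the foreign agent,
    # which names its own address as the care-of address.
    request = {"home": home_addr, "care_of": foreign_agent_addr,
               "home_agent": home_agent_addr, "lifetime": lifetime}
    # Step 4: the home agent binds home address -> care-of address.
    binding_table[request["home"]] = request["care_of"]
    # Step 5: the home agent acknowledges the registration.
    reply = {"home": home_addr, "status": "accepted"}
    # Step 6: the foreign agent records the visitor and relays the reply.
    visitor_list[home_addr] = {"home_agent": home_agent_addr,
                               "lifetime": lifetime}
    return reply
```

Note how the mobile node never talks to the home agent directly here; the foreign agent acts on its behalf, which is why both the binding table and the visitor list end up updated.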

3. In Service: This stage can be subdivided into the following steps:

  1. When a correspondent node wants to communicate with the mobile node, it sends an IP packet addressed to the permanent IP address of the mobile node.

  2. The home agent intercepts this packet and consults the mobility binding table to find out if the mobile node is currently visiting any other network.

  3. The home agent finds out the mobile node's care-of address and constructs a new IP header that contains the mobile node's care-of address as the destination IP address. The original IP packet is put into the payload of this IP packet. It then sends the packet. This process of encapsulating one IP packet into the payload of another is known as IP-within-IP encapsulation [11], or tunneling.

  4. When the encapsulated packet reaches the mobile node's current network, the foreign agent decapsulates the packet and finds out the mobile node's home address. It then consults the visitor list to see if it has an entry for that mobile node.

  5. If there is an entry for the mobile node on the visitor list, the foreign agent retrieves the corresponding media (link-layer) address and uses it to deliver the packet to the mobile node.

  6. When the mobile node wants to send a message to a correspondent node, it forwards the packet to the foreign agent, which in turn relays the packet to the correspondent node using normal IP routing.

  7. The foreign agent continues serving the mobile node until the granted lifetime expires. If the mobile node wants to continue the service, it has to reissue the Registration Request.
Figure 4 illustrates the tunneling operation.
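The IP-within-IP encapsulation at the heart of the tunneling step can be illustrated with dictionaries standing in for IP headers. A real implementation builds actual IP headers per RFC 2003; treat this purely as a sketch with assumed names.

```python
def encapsulate(packet, care_of_addr, home_agent_addr):
    # The home agent wraps the original packet in an outer packet
    # addressed to the care-of address (IP-within-IP / tunneling).
    return {"src": home_agent_addr, "dst": care_of_addr, "payload": packet}

def decapsulate(outer_packet):
    # The foreign agent strips the outer header and reads the mobile
    # node's home address from the inner packet's destination field.
    inner = outer_packet["payload"]
    return inner, inner["dst"]
```

The key property is that the inner packet emerges unchanged, so the correspondent node's original addressing survives the detour through the tunnel.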
4. Deregistration: If a mobile node wants to drop its care-of address, it has to deregister with its home agent. It achieves this by sending a Registration Request with the lifetime set to zero. There is no need to deregister with the foreign agent, as the registration automatically expires when the lifetime becomes zero. However, if the mobile node visits a new network, the old foreign network does not know the new care-of address of the mobile node. Thus datagrams already forwarded by the home agent to the old foreign agent of the mobile node are lost.
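The lifetime-zero convention can be sketched as a single handler: a Registration Request with lifetime 0 removes the binding (deregistration), while any other lifetime (re)establishes it. Names are illustrative assumptions.

```python
def handle_registration_request(bindings, home_addr, care_of_addr, lifetime):
    if lifetime == 0:
        bindings.pop(home_addr, None)   # deregister: drop the binding
    else:
        bindings[home_addr] = care_of_addr
    return bindings
```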



IT News: Microsoft

MS in talks with retail chains for RFID software

MICROSOFT is in talks with retail biggies, financial service providers and government agencies for its new radio frequency identification (RFID) software. The software, called Microsoft BizTalk Server 2006 R2, can be used across sectors to improve business processes such as asset tracking, supply chain management and inventory control. This is possible as the software enables all types of RFID hardware to become fully compatible with the Microsoft platform.
The US retail chain Wal-Mart adopted radio frequency identification software nearly three years ago to improve its back-end operations. Retail chains in India could take a cue and use the new platform for inventory management. Banks, on the other hand, can use it to track transaction records or even their high-net-worth clients. RFID solutions can be used for e-passports as well.
BizTalk RFID was developed at the MIDC centre in Hyderabad. Microsoft has tied up with over 100 partners including HP, Intel, TCS, small software and hardware vendors to develop the platform. In fact, TCS was among the early movers in developing RFID applications. “We have implemented this technology for ITC’s cigarette unit in Kolkata,” said Pradeep Misra, centre of excellence (RFID), TCS.
HP is also implementing the technology for Bajaj Auto in partnership with Microsoft. The technology can also serve many other enterprises. “With the drop in hardware prices and devices becoming standardised, RFID technology will become affordable. With our easy-to-use software, RFID technology will be ready for mass adoption in due course,” said Srini Koppolu, corporate vice-president, Microsoft India Development Centre.

Thursday, September 20, 2007

Info: Ada Lovelace - First programmer

Lovelace, the daughter of Lord Byron and an assistant to the mathematician Charles Babbage, designer of early computing machines, is regarded as the first computer programmer.
Augusta Ada Byron was born on 10th December 1815, in London, the daughter of the poet George Gordon Byron and Annabella Milbanke Byron. On 21st April 1816, Byron separated from his wife and left Britain forever, never to see his wife and daughter again.
Ada was educated privately by tutors and by her mother, who had an abiding interest in mathematics. Lord Byron once called her ‘the princess of parallelograms’. Her mother gave Ada regular lessons in maths, in the hope that the logical discipline would inhibit the onset of madness, which Annabella thought existed in the Byron family.
In 1835, Ada married William King, 8th Baron King and, when he was created an earl in 1838, she became countess of Lovelace. She became acquainted with Mary Somerville, a noted scientific author, who introduced her in turn to Charles Babbage, the inventor of a calculating machine, later to become known as the computer. She told Babbage that she was well acquainted with mathematics and offered to help with the construction of his machine. Babbage doubted that a woman would have sufficient knowledge of mathematics to be of any value to him, but when Ada added that she had some knowledge of languages, Babbage hired her as a translator.
In 1843, Ada translated and annotated an article written by the Italian mathematician and engineer, Luigi Federico Menabrea, who had proposed new functions for Babbage’s Analytical Engine. Ada not only translated the article, but added her own details and annotations. Her elaborate annotations, especially her description of how the Analytical Engine could be programmed to compute and make calculations beyond the power of the human brain, earned her the title of first computer programmer. She wrote in her notes ‘the Analytical Engine weaves algebraic patterns, just as the Jacquard-loom weaves flowers and leaves.’ She added ‘the Engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.’
In 1852, at the age of 36, Ada contracted cancer and was put in the care of physicians who recommended bloodletting. Unfortunately they went too far, as was often the case in the Nineteenth Century, and she bled to death. She died on 29th November 1852 and was buried with her father at the Byron family church in Hucknall.
The programming language Ada is named after her.

Mobile Computing: Mobile IP - Part I

Introduction:-Mobile IP is the underlying technology for support of various mobile data and wireless networking applications. For example, GPRS depends on Mobile IP to enable the relay of messages to a GPRS phone via the SGSN from the GGSN without the sender needing to know the serving node's IP address.
The Impetus for Mobile IP:-With the advent of packet-based mobile data applications and the increase of wireless computing, there is a corresponding need for seamless communication between the mobile device and a packet data network (PDN) such as the Internet.
Mobile IP Definitions:-

Mobile Node: A device capable of performing network roaming

Home Agent: A router on the home network which serves as the anchor point for communications with the mobile node.

Foreign Agent: A router that functions as the mobile node's point of attachment when it travels to the foreign network.

Care of Address: Termination point of the tunnel toward the mobile node when it is not in the home network.

Correspondent Node: The device that the mobile node is communicating with, such as a web server.

Mobile IP in Operation:-

To accomplish this, Mobile IP establishes the visited network as a foreign network and the home network as the home network. Mobile IP uses a tunneling protocol to allow messages from the PDN to be directed to the mobile node's IP address. This is accomplished by routing messages to the foreign network for delivery via tunneling: the original IP packet is carried inside a packet destined for the temporary IP address assigned to the mobile node on the foreign network. The Home Agent and Foreign Agent continuously advertise their services on the network through an Agent Discovery process, enabling the Mobile Node to recognize when it has attached to a new Foreign Agent and to register a new Care of Address.

This method allows for seamless communications between the mobile node and applications residing on the PDN, allowing for seamless, always-on connectivity for mobile data applications and wireless computing.

Mobile IP enabled Applications:-Mobile IP technology is embedded in the functionality of packet equipment for 2.5G and 3G. In addition, mobile IP enables advanced applications such as unified messaging.




Wednesday, September 19, 2007

Mobile Computing: Introduction - Part II

Wireless Networks:-
-Two types
1)Voice network
Cellular systems (GSM, CDMA etc.)
2)Data network
WiFi, HiperLAN
-Networks are moving towards an integrated network
GPRS
Voice over WiFi

--Physical Layer (PHY)
-Binary (digital) data transmitted over airwave
-Requires antenna
-characterized by transmission range, power, modulation scheme, frequency range

--MAC Layer
-How wireless stations share the air medium and avoid contention to transmit data successfully
-“listen before you speak” or “speak at predetermined interval”
-Unique problems
Hidden node
Exposed node


--Network Layer
Responsible for facilitating multihop communication
Need to run some routing protocol
Traditional routing protocols may not work efficiently
Mobility at IP layer


--Transport Layer
Reliable Transport such as TCP may not work well in wireless medium
TCP inherently assumes that packet loss is due to congestion
Needs modification for wireless network

Mobile Computing: Introduction - Part I

Mobile Computing is a generic term describing the application of small, portable, and wireless computing and communication devices. This includes devices like laptops with wireless LAN technology, mobile phones, wearable computers, Personal Digital Assistants (PDAs) with Bluetooth or IrDA interfaces, and USB flash drives.

Mobile computing IS wireless computing. This means connectivity with few or no wires.

Instead of using cables, wireless network devices may be installed to access the network, as long as there is equipment around (like small cell sites) that can provide wireless network access/services. A student or faculty member with a laptop can move from one table to another without worrying about power cables to stay connected. That's mobility!

Some cellular phones are like actual computer units. They may be used to prepare reports, view videos, listen to music, and even control the TV! Amazing, isn't it!?

There are other equipment/devices that we use nowadays that function as regular desktop computer units -- e.g., notebooks, tablet computers (an advanced design of a notebook, but really still a notebook), handheld computers or personal digital assistants (PDAs).

In some tablet PCs, one can detach the screen and continue using it like a notepad. The HP tablet shown here is an example of a tablet PC.

Wireless technologies include wireless devices (e.g., cell phones) and wireless networks (e.g., cellular phone network).

* B = types and different uses of wireless technologies in the University and in education
* C = types and different uses of mobile computing in the University and in education
* B = C = types and different uses of regular computers, cell phones, PDAs

Tablet PCs are used for e-mail, research, communication, e-learning, report preparation, playing/simulation, program access, watching videos, reading, et al.

The main benefit of using mobile/wireless computing is that a computer could be brought anywhere inside classrooms, conference rooms and yes, even in bedrooms and restrooms, in airports and in malls. What many companies do is issue notebooks to their sales people so they can bring their presentation materials, prepare proposals, and access information anywhere (i.e., with the use of cellphones to access the Internet). The cellphone-notebook set-up is expensive though.

Since 2001, "access points" have been installed within the campus. Access points are devices that provide wireless network connection among computer units (i.e., not just the computer units commonly seen).

DLSU is currently purchasing more access points so we can have better coverage. We'll increase the quantity gradually.

Since 2001, we have been lending out network cards in the library, so students and faculty members can borrow them (like a book), install them in their notebooks, and access the Internet practically anywhere on the school grounds. Like cell phones, there are blind spots (i.e., places where the network cannot be accessed because there is no signal). These places are outside the range of the access points. To date, only a small number of students borrow these units, as they are not yet aware of this service. However, the figures are steadily increasing.

The security of communication via wireless network access is a major concern. Like any tool, wireless access devices can be put to bad use. Some wireless network signals can be used to gain unauthorized access to data/information. To prevent this, network traffic needs to be encrypted. Newer devices are capable of this.


Java Struts: ActionServlet, Action Class, ActionForm

ActionServlet:-The class org.apache.struts.action.ActionServlet is called the ActionServlet. In the Jakarta Struts Framework this class plays the role of controller. All requests to the server go through the controller, which is responsible for handling them.
Action Class:-The Action is part of the controller. The purpose of the Action class is to translate the HttpServletRequest to the business logic. To use an Action, we need to subclass it and override the execute() method. The ActionServlet passes the populated ActionForm to the Action's execute() method. There should be no database interactions in the action. The action should receive the request, call business objects (which then handle database access, or interface with J2EE, etc.) and then determine where to go next. Even better, the business objects could be handed to the action at runtime (IoC style), thus removing any dependencies on the model. The return type of the execute() method is ActionForward, which the Struts Framework uses to forward the request to the resource identified by the returned ActionForward object.
ActionForm:-
An ActionForm is a JavaBean that extends org.apache.struts.action.ActionForm. An ActionForm maintains the session state for the web application, and the ActionForm object is automatically populated on the server side with data entered from a form on the client side.

Linux: How to reset forgotten root password

Sometimes it may happen that you simply forget the root password. And it's more frustrating if it is the only account on your system. What to do if this happens? Re-install the OS? No!!!
When one door closes, the other opens. We have another way to login to system.


There are various methods available for resetting a root password: booting into single-user mode, booting from a boot disk and editing the password file, or mounting the drive on another computer and editing the password file.


In this post, I will describe only the simplest yet most useful method. The others require a little more knowledge of OS-level operations and may prove dangerous if performed the wrong way.


Resetting the password by booting into single-user mode
This is the easiest and the fastest method to reset passwords. The steps are a little different depending on whether you are using GRUB or LILO as a boot manager.


For LILO
0) Reboot the system. When you see the LILO: prompt, type in linux single and press 'Enter'. This will log you in as root in single-user mode. If your system requires you to enter your root password to log in, then try linux init=/bin/bash instead.


1) Once the system finishes booting, you will be logged in as root in single-user mode. Use passwd and choose a new password for root.


2) Type reboot to reboot the system and then you can login with the new password you just selected.


If you have a new version of LILO which gives you a menu selection of the various kernels available press Tab to get the LILO: prompt and then proceed as shown above.


For GRUB
0) Reboot the system, and when you are at the selection prompt, highlight the line for Linux and press 'e'. You may only have 2 seconds to do this, so be quick.

1) This will take you to another screen where you should select the entry that begins with 'kernel' and press 'e' again.

2) Append ' single' to the end of that line (without the quotes). Make sure that there is a space between what's there and 'single'. If your system requires you to enter your root password to log into single-user mode, then append init=/bin/bash after 'single'. Hit 'Enter' to save the changes.

3) Press 'b' to boot into Single User Mode.

4) Once the system finishes booting, you will be logged in as root. Use passwd and choose a new password for root.

5) Type reboot to reboot the system, and you can login with the new password you just selected.
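As an illustration, an edited GRUB kernel line might look like the following; the kernel version and root device shown are assumptions that will differ on your system:

```
kernel /vmlinuz-2.6.18 ro root=/dev/hda1 single init=/bin/bash
```

The trailing init=/bin/bash is only needed when single-user mode itself asks for the root password.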




* Disclaimer *
Use the information in this document at your own risk. I completely deny any potential liability for the contents of this document. Use of the concepts, examples, and/or other content of this document is entirely at your own risk.
The information in this document should only be used to recover passwords from machines to which you have legal access. If you use this information to break into other people's systems, then I am not responsible for it and you deserve your fate when you are caught. So don't blame me.
You are strongly advised to make a backup of your system before performing any of the actions listed in this document.

Tuesday, September 18, 2007

Networking: How does Ping actually work?

Ping is a basic Internet program that most of us use daily, but did you ever stop to wonder how it really worked?

• As the ping program initializes, it opens a raw ICMP socket so that it can employ IP directly, circumventing TCP and UDP.
• Ping formats an ICMP type 8 message, an Echo Request, and sends it (using the “sendto” function) to the designated target address. The system provides the IP header and the data link layer envelope.
• As ICMP messages are received, ping has the opportunity to examine each packet to pick out those items that are of interest.
• The usual behavior is to siphon off ICMP type 0 messages, Echo Replies, which have an identification field value that matches the program PID.
• Ping uses the timestamp in the data area of the Echo Reply to calculate a round-trip time. It also reports the TTL from the IP header of the reply.
• When things do not work normally, ping may report some of the other ICMP message types that show up in the inbox. This includes things like Destination Unreachable and Time Exceeded messages.
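The Echo Request that ping formats can be sketched without opening a raw socket (which requires root privileges). The ICMP header is type 8, code 0, a checksum, an identifier, and a sequence number, with the checksum computed as the standard one's-complement Internet checksum over the whole message. The field layout follows RFC 792; the function names are illustrative.

```python
import struct

def internet_checksum(data: bytes) -> int:
    # One's-complement sum over 16-bit words, folded and complemented.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)
    total += total >> 16
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    # ICMP header: type 8 (Echo Request), code 0, checksum, id, sequence.
    # The checksum field is zero while the checksum is being computed.
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    checksum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, ident, seq) + payload
```

A handy property of the Internet checksum is that recomputing it over a correctly checksummed packet yields zero, which is exactly how receivers verify it.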

D: Why D?


Continued from previous post...


Why, indeed. Who needs another programming language?
The software industry has come a long way since the C language was invented. Many new concepts were added to the language with C++, but backwards compatibility with C was maintained, including compatibility with nearly all the weaknesses of the original design. There have been many attempts to fix those weaknesses, but the compatibility issue frustrates it. Meanwhile, both C and C++ undergo a constant accretion of new features. These new features must be carefully fitted into the existing structure without requiring rewriting old code. The end result is very complicated - the C standard is nearly 500 pages, and the C++ standard is about 750 pages! C++ is a difficult and costly language to implement, resulting in implementation variations that make it frustrating to write fully portable C++ code.

C++ programmers tend to program in particular islands of the language, i.e. getting very proficient using certain features while avoiding other feature sets. While the code is usually portable from compiler to compiler, it can be hard to port it from programmer to programmer. A great strength of C++ is that it can support many radically different styles of programming - but in long term use, the overlapping and contradictory styles are a hindrance.
C++ implements things like resizable arrays and string concatenation as part of the standard library, not as part of the core language. Not being part of the core language has several suboptimal consequences.

Can the power and capability of C++ be extracted, redesigned, and recast into a language that is simple, orthogonal, and practical? Can it all be put into a package that is easy for compiler writers to correctly implement, and which enables compilers to efficiently generate aggressively optimized code?

Modern compiler technology has progressed to the point where language features for the purpose of compensating for primitive compiler technology can be omitted. (An example of this would be the 'register' keyword in C, a more subtle example is the macro preprocessor in C.) We can rely on modern compiler optimization technology to not need language features necessary to get acceptable code quality out of primitive compilers.

Monday, September 17, 2007

Certifications: Top 10 IT Certifications

Best Hands-On Programs

Certifications in this category involve exams that not only test real-world skills and knowledge, but also demand that the test-takers demonstrate such skills and knowledge as a part of an exam or hands-on training. Such exams or programs are sometimes called “performance-based,” “practicum” or “laboratory” (lab) exams. Whatever name is used to identify these certifications, they all involve on-the-spot analysis and problem-solving and do their best to stage (or simulate) real-world system and hardware situations. Roll up your sleeves, and get your hands dirty while getting as close to a reality check as any certifications deliver today.

1. Cisco Certified Internetwork Expert (CCIE): With more than 10,000 CCIEs certified worldwide, this nonpareil credential includes a challenging, one-day lab exam that’s still widely regarded as the toughest certification exam around. Most CCIE candidates take the $1,250 lab exam—which also requires travel expenses for those who don’t live within driving distance of one of the 10 lab test centers around the globe—more than once to get certified. While neither cheap nor easy, the CCIE remains a valued prize as certifications go, which explains why it appears at or near the top of lists of the most desired or most valuable IT certifications.

2. Red Hat Certified Engineer (RHCE): The RHCE exams take an entire day and include about six hours worth of what the company calls performance-based exams—where candidates must install, configure or troubleshoot Red Hat servers and related network protocols and services. Highly regarded as representative of real-world situations and circumstances, these challenging exams also get high marks from certified professionals and their employers alike. The Red Hat Certified Technician (RHCT) exam is also performance-based and gets many of the same accolades. (It originally ranked as No. 4 in this list, but was dropped as a separate entry for brevity’s sake).

3. Novell Certified Directory Engineer (CDE): Novell calls the CDE exam a practicum, which requires logging into a carefully contrived and constructed set of networking components—servers, services and directories fully populated with users, groups, accounts, access controls and so forth—to analyze, design, configure, troubleshoot and repair the directories that make them work. Successful exam takers label the exam as demanding and intense, but also as an honest test of real-world knowledge and skills.

4. Oracle9i DBA Certified Professional (OCP): With the introduction of the Oracle9i DBA program, Oracle also now requires all candidates to complete an instructor-led hands-on course that involves significant real-world interaction and problem-solving, in addition to standard multiple-choice exams. This injects the kind of hands-on component needed to qualify for this list.

5. Oracle9i Database Administrator Certified Master (OCM): This credential requires a grueling two-day practicum exam administered at Oracle University locations. The exam’s still too new for a lot of intelligence to be available, but word is that it’s demanding, comprehensive and difficult.

6. Field Certified Systems Engineer (FCSE): Sponsored by the Field Certified Professional Association (FCPA), whose mission is to provide certifications based on the principles and practices of performance-based testing, the FCSE is available for Windows NT 4.0 and Windows 2000/XP environments, with numerous additional environments slated for coverage. Initial reports describe the credential as living up to its promise to identify individuals with real-world skills and knowledge appropriate for senior system engineering positions.

7. Field Certified Systems Administrator (FCSA): A more junior-level version of the FCSE, available for Windows NT 4.0, Windows 2000/XP and Cisco-based networking environments.

8. Field Certified PC Technician (FCPT): One of the Field Certified Help Desk Technician group of exams from the FCPA, this credential aims to identify individuals with real-world PC skills suitable for a bench technician, installer or help-desk professional. Numerous additional credentials in this general area are planned and should be worth watching.

9. Certified Professional Information Technology Consultant (CPITC): A certification from the Professional Standards Institute, an organization devoted to establishing performance-based credentials for all kinds of professionals, this credential covers a broad range of IT subject matter and must be supported with documentation and testing designed to measure real-world knowledge and expertise. The credential also carries hefty annual recertification requirements.

10. Cisco Career Certifications (Associate, Professional and Specialist): Although the various Cisco certifications beneath the CCIE do not include lab exams or practicums, they do make extensive use of simulation technology to include real-world problem-solving and to measure real-world skills as part (but not all) of the current exams relevant to these credentials. This makes them worthy of mention as the last item in this list.
See www.cisco.com/go/certification.

Technology : Dirty Secrets about working in IT

If you are preparing for a career in IT or are new to IT, many of the “dirty little secrets” listed below may surprise you because we don’t usually talk about them out loud. If you are an IT veteran, you’ve probably encountered most of these issues and have a few of your own to add — and please, by all means, take a moment to add them to the discussion. Most of these secrets are aimed at network administrators, IT managers, and desktop support professionals. This list is not aimed at developers and programmers — they have their own set of additional dirty little secrets — but some of these will apply to them as well.

10.) The pay in IT is good compared to many other professions, but since they pay you well, they often think they own you

Although the pay for IT professionals is not as great as it was before the dot-com flameout and the IT backlash in 2001-2002, IT workers still make very good money compared to many other professions (at least the ones that require only an associate’s or bachelor’s degree). And there is every reason to believe that IT pros will continue to be in demand in the coming decades, as technology continues to play a growing role in business and society. However, because IT professionals can be so expensive, some companies treat IT pros like they own them. If you have to answer a tech call at 9:00 PM because someone is working late, you hear, “That’s just part of the job.” If you need to work six hours on a Saturday to deploy a software update to avoid downtime during business hours, you get, “There’s no comp time for that since you’re on salary. That’s why we pay you the big bucks!”

9.) It will be your fault when users make silly errors

Some users will angrily snap at you when they are frustrated. They will yell, “What’s wrong with this thing?” or “This computer is NOT working!” or (my personal favorite), “What did you do to the computers?” In fact, the problem is that they accidentally deleted the Internet Explorer icon from the desktop, or unplugged the mouse from the back of the computer with their foot, or spilled their coffee on the keyboard.

8.) You will go from goat to hero and back again multiple times within any given day

When you miraculously fix something that had been keeping multiple employees from being able to work for the past 10 minutes — and they don’t realize how simple the fix really was — you will become the hero of the moment and everyone’s favorite employee. But they will conveniently forget about your hero anointment a few hours later when they have trouble printing because of a network slowdown — you will be enemy No. 1 at that moment. But if you show users a handy little Microsoft Outlook trick before the end of the day, you’ll soon return to hero status.

7.) Certifications won’t always help you become a better technologist, but they can help you land a better job or a pay raise

Headhunters and human resources departments love IT certifications. They make it easy to match up job candidates with job openings. They also make it easy for HR to screen candidates. You’ll hear a lot of veteran IT pros whine about techies who were hired based on certifications but who don’t have the experience to effectively do the job. They are often right. That has happened in plenty of places. But the fact is that certifications open up your career options. They show that you are organized and ambitious and have a desire to educate yourself and expand your skills. If you are an experienced IT pro and have certifications to match your experience, you will find yourself to be extremely marketable. Tech certifications are simply a way to prove your baseline knowledge and to market yourself as a professional. However, most of them are not a good indicator of how good you will be at the job.

6.) Your nontechnical co-workers will use you as personal tech support for their home PCs

Your co-workers (in addition to your friends, family, and neighbors) will view you as their personal tech support department for their home PCs and home networks. They will e-mail you, call you, and/or stop by your office to talk about how to deal with the virus that took over their home PC or the wireless router that stopped working after the last power outage and to ask you how to put their photos and videos on the Web so their grandparents in Iowa can view them. Some of them might even ask you if they can bring their home PC to the office for you to fix it. The polite ones will offer to pay you, but some of them will just hope or expect you can help them for free. Helping these folks can be very rewarding, but you have to be careful about where to draw the line and know when to decline.

5.) Vendors and consultants will take all the credit when things work well and will blame you when things go wrong

Working with IT consultants is an important part of the job and can be one of the more challenging things to manage. Consultants bring niche expertise to help you deploy specialized systems, and when everything works right, it's a great partnership. But you have to be careful. When things go wrong, some consultants will try to push the blame off on you by arguing that their solution works great everywhere else so it must be a problem with the local IT infrastructure. Conversely, when a project is wildly successful, there are consultants who will try to take all of the credit and ignore the substantial work you did to customize and implement the solution for your company.

4.) You’ll spend far more time babysitting old technologies than implementing new ones

One of the most attractive things about working in IT is the idea that we’ll get to play with the latest cutting edge technologies. However, that’s not usually the case in most IT jobs. The truth is that IT professionals typically spend far more time maintaining, babysitting, and nursing established technologies than implementing new ones. Even IT consultants, who work with more of the latest and greatest technologies, still tend to work primarily with established, proven solutions rather than the real cutting edge stuff.

3.) Veteran IT professionals are often the biggest roadblock to implementing new technologies

A lot of companies could implement more cutting edge stuff than they do. There are plenty of times when upgrading or replacing software or infrastructure can potentially save money and/or increase productivity and profitability. However, it’s often the case that one of the largest roadblocks to migrating to new technologies is not budget constraints or management objections; it’s the veteran techies in the IT department. Once they have something up and running, they are reluctant to change it. This can be a good thing because their jobs depend on keeping the infrastructure stable, but they also use that as an excuse to not spend the time to learn new things or stretch themselves in new directions. They get lazy, complacent, and self-satisfied.

2.) Some IT professionals deploy technologies that do more to consolidate their own power than to help the business

Another subtle but blameworthy thing that some IT professionals do is select and implement technologies based on how well those technologies make the business dependent on the IT pros to run them, rather than which ones are truly best for the business itself. For example, IT pros might select a solution that requires specialized skills to maintain instead of a more turnkey solution. Or an IT manager might have more of a Linux/UNIX background and so chooses a Linux-based solution over a Windows solution, even though the Windows solution is a better business decision (or, vice versa, a Windows admin might bypass a Linux-based appliance, for example). There are often excuses and justifications given for this type of behavior, but most of them are disingenuous.

1.) IT pros frequently use jargon to confuse nontechnical business managers and hide the fact that they screwed up

All IT pros — even the very best — screw things up once in a while. This is a profession where a lot is at stake and the systems that are being managed are complex and often difficult to integrate. However, not all IT pros are good at admitting when they make a mistake. Many of them take advantage of the fact that business managers (and even some high-level technical managers) don’t have a good understanding of technology, and so the techies will use jargon to confuse them (and cover up the truth) when explaining why a problem or an outage occurred. For example, to tell a business manager why a financial application went down for three hours, the techie might say, “We had a blue screen of death on the SQL Server that runs that app. Damn Microsoft!” What the techie would fail to mention was that the BSOD was caused by a driver update he applied to the server without first testing it on a staging machine.

(Courtesy: TechRepublic)

Windows: How to Use Windows Notepad as a Professional Diary


Use the following VERY easy steps to use Windows Notepad as your own diary, complete with a stamped date & time!

Step 0
First, open a new, blank Notepad file.

Step 1
Type .LOG as the first line of the file and press ENTER. Save the Notepad file and then close it. Note: You must type .LOG in capital letters!

Step 2
Now reopen the file. Each time you open it, Notepad automatically appends a fresh date/time stamp to the end of the file, below the previous entry, so your notes stay neatly organized in chronological order. How's that for a trick?

Sunday, September 16, 2007

D: What is D?

D is a general purpose systems and applications programming language. It is a higher level language than C++, but retains the ability to write high performance code and interface directly with operating system APIs and with hardware. D is well suited to writing medium to large scale million-line programs with teams of developers. D is easy to learn, provides many capabilities to aid the programmer, and is well suited to aggressive compiler optimization technology.

D is not a scripting language, nor an interpreted language. It doesn't come with a VM, a religion, or an overriding philosophy. It's a practical language for practical programmers who need to get the job done quickly, reliably, and leave behind maintainable, easy to understand code.

D is the culmination of decades of experience implementing compilers for many diverse languages, and attempting to construct large projects using those languages. D draws inspiration from those other languages (most especially C++) and tempers it with experience and real world practicality.

Friday, September 14, 2007

C: How to use memcpy function in C

The memcpy function in C copies the specified number of bytes of data from the specified source to the specified destination. This is a binary copy, so the underlying data type is irrelevant. The following steps will help you use the memcpy function.

Step 0
Learn the syntax of memcpy. The complete prototype is void *memcpy(void *destination, const void *source, size_t num);. Note that this function always copies exactly num bytes and does not look for a terminating character, which keeps it as efficient as possible. memcpy returns a pointer to the destination.

Step 1
Know that the source and destination pointers are passed as void pointers, which is why memcpy works with any data type. The destination and source arrays should each be at least num bytes long, or the copy will overflow a buffer. If the source and destination regions overlap, use memmove instead; memcpy's behavior is undefined in that case.

Step 2
Understand that memcpy is declared in the string.h header (the cstring header in C++), so you need to include string.h to use it.

Step 3
Look at the following complete program for some simple examples of how to use memcpy:


#include <stdio.h>
#include <string.h>

int main ()
{
    char string1[] = "test string";
    char string2[80];
    memcpy (string2, string1, strlen(string1) + 1);
    printf ("string1: %s\nstring2: %s\n", string1, string2);
    memcpy (string1, "", 1);
    printf ("string1: %s\n", string1);
    return 0;
}

Observe the following output for this program:
string1: test string
string2: test string
string1:
The first use of memcpy copies the contents of string1 into string2, including the terminating null character. The second use of memcpy clears string1 by copying a null terminator into its first position.

Note: Please DO NOT copy-paste the code given in this post.