From time immemorial, humankind has used data, a collection of raw facts, figures and so on, at every instant of time, in one way or another. To be useful, data has to be processed so that meaningful conclusions can be drawn from it. This processed data is known as Information.
Data has to be stored, retrieved, modified and maintained. Traditionally, data was maintained manually: it was stored in files (paper and other material based) and updated and retrieved manually as and when required. This posed many difficulties, which were reduced to a great extent with the advent of computers, when records began to be maintained as computer files. These files are collections of related data and records. From this emerged new methods of database management, the beginning of a new revolution which can be termed the "Database Revolution".
Traditional File Management
Traditionally, computer-based data management involved storing data in the form of files, which could be accessed, manipulated, updated and worked with using software, thus reducing the cumbersome work of manually maintained files. This computer-based file management was envisioned as a paperless environment. Depending upon the storage and access strategy, files can be classified as under:
Sequential Access File, Random Access File, Indexed-Sequential Access File
Sequential Access File
These are files in which the data (records) are stored one after the other, and when accessed, they are read in the same sequence. These files are the simplest of all types and easy to understand. If the data stored is voluminous, searching for a specific record can be a problem, as it is a time-consuming process.
These files are stored as a series of bytes and can only be accessed in the same sequence in which they were stored. They are appropriate when the file is small; for larger files this organisation is not advisable.
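The sequential scan described above can be sketched in Python (the file name and sample records here are illustrative, not from the original text):

```python
# Minimal sketch of a sequential access file: records are written one
# after another and must be read back in the same order.
records = ["1001,Alice", "1002,Bob", "1003,Carol"]

with open("records.txt", "w") as f:
    for rec in records:
        f.write(rec + "\n")

# To find a specific record we must scan from the beginning,
# which is the time-consuming search mentioned for large files.
with open("records.txt") as f:
    for line in f:
        if line.startswith("1003"):
            print("found:", line.strip())  # -> found: 1003,Carol
            break
```

Note that every earlier record is read and discarded before the target is reached, which is why this method scales poorly.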
Random Access File
These are files in which any record can be accessed directly, without following the sequence in which the records are stored. These files are appropriate for the storage of voluminous records, as the retrieval of a specific record becomes easy because the record is accessed directly. This method of file organisation is usually a bit more expensive, but it pays off in faster access.
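One common way to make direct access possible, sketched here under the assumption of fixed-length records (the file name and record size are illustrative), is to compute a record's byte offset and seek straight to it:

```python
# Sketch of random (direct) access: with every record padded to a fixed
# size, record i begins at byte offset i * RECORD_SIZE, so seek() can
# jump straight to it without reading the earlier records.
RECORD_SIZE = 16  # illustrative fixed record length in bytes

with open("accounts.dat", "wb") as f:
    for name in ["Alice", "Bob", "Carol", "Dave"]:
        f.write(name.encode().ljust(RECORD_SIZE))  # pad to fixed size

def read_record(path, index):
    """Fetch record `index` directly, without scanning earlier ones."""
    with open(path, "rb") as f:
        f.seek(index * RECORD_SIZE)   # jump straight to the record
        return f.read(RECORD_SIZE).rstrip().decode()

print(read_record("accounts.dat", 2))  # -> Carol
```

The extra cost mentioned in the text shows up here as the space wasted by padding every record to the same fixed size.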
Indexed-Sequential Access File
Just as the index of a book helps the reader by giving the exact location of a specific topic, in this file every record has an index entry which tells the computer where the record is found. Usually a specific field in the file is used to create the index, and while accessing the file, the index is used to locate the record, which is very fast. This is an efficient file organisation method, which makes the search for records an easy task.
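The idea can be sketched in Python by keeping the records in sequential order on disk while a small index maps a key field to each record's byte offset (file name, keys and index structure here are illustrative assumptions):

```python
# Sketch of an indexed-sequential file: records stay in sequential
# order, and an index maps a key field to the byte offset where each
# record starts, so a lookup seeks directly instead of scanning.
records = [("1001", "Alice"), ("1002", "Bob"), ("1003", "Carol")]

index = {}
with open("indexed.dat", "w") as f:
    for key, name in records:
        index[key] = f.tell()          # remember where this record starts
        f.write(f"{key},{name}\n")

def lookup(key):
    offset = index[key]                # the index gives the exact location
    with open("indexed.dat") as f:
        f.seek(offset)
        return f.readline().strip()

print(lookup("1002"))  # -> 1002,Bob
```

Because the data itself remains sequential, the file can still be processed in order when needed, while the index provides the fast direct lookup described above.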
Computers are usually used for processing data, and the available software follows various processing techniques, depending on the need and the type of data. The following are some of the commonly used processing techniques.
Batch Processing
This technique of processing involves waiting for data to gather, then processing the gathered data as a batch. The gathered data is fed into the system for processing. Usually a time interval is fixed for collecting the data; it may be at the end of the day, week, fortnight, month, etc.
This method is used when the data has to be gathered first and only then processed. For example, the transactions of daily operations have to be recorded and then processed together at specific time intervals, such as the end of the day. This method is advantageous for certain jobs, but at other times it is disadvantageous, as it causes unnecessary delay in the processing of jobs; moreover, the data is not current if a request is made between the time intervals fixed for transaction processing.
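The gather-then-process cycle above can be sketched in a few lines of Python (the function names and amounts are illustrative only):

```python
# Sketch of batch processing: transactions accumulate in a batch and
# are only applied together when the batch is processed, e.g. at the
# end of the day.
batch = []          # transactions wait here until the batch is processed

def record(amount):
    batch.append(amount)           # gathered, but not yet processed

def process_batch(balance):
    """Apply all gathered transactions at once, at the fixed interval."""
    for amount in batch:
        balance += amount
    batch.clear()                  # the batch starts empty again
    return balance

record(100)
record(-40)
record(25)
print(process_batch(1000))  # -> 1085
```

The delay the text warns about is visible here: until `process_batch` runs, the balance does not reflect the recorded transactions.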
Online Processing
This way of processing involves processing transactions as and when they occur, without waiting for transactions to accumulate and processing them together as in batch processing. This method removes the disadvantages of batch processing: transactions need not wait, which removes delays, and the data is always up to date.
Usually, the data is entered from various terminals which are directly connected to the CPU. As and when a transaction occurs, it is recorded by the respective terminal and sent immediately to the CPU for processing. Thus, both loss of data and delays are avoided.
Real-Time Transaction Processing
This is one of the methods which uses the online processing technique. Here, as soon as the data is generated, it is processed and the files/databases are immediately updated, so that time delays are removed. It provides an online interactive processing environment, which gives the results of the processing immediately, so that decisions can be taken easily.
These real-time processing systems are quite helpful and widely used in areas like satellite launches, rocket firing, air traffic control, airline ticketing, railway reservation systems, etc., where the data has to show an immediate effect, processing cannot be delayed, and the output also has to be immediate.
Offline Processing
In this method of processing, the data is not directly keyed into the CPU; it is usually keyed onto tapes or magnetic disks from a terminal, and later read into the CPU for processing. This is adopted because input-output operations are slow compared to processing operations, so the CPU would otherwise sit idle while waiting for input. Apart from this, the data entered is validated and formatted so that the processing becomes easier. This method aims at the optimum utilisation of the CPU.
Usually a minicomputer is used to control this offline processing, and later the data is fed into the main computer.
Limitations of File Management System
File management becomes a very difficult task as the number of records increases, and the traditional file approach poses problems in searching for and retrieving data.
In a file management system, data processing is done by designing files specially to suit different applications. With growing complexity, this system demanded a large number of files, which made it more difficult to work with them. From the organisational (business) point of view, where different functional units (departments) share huge volumes of data, this may lead to the following problems:
Redundancy & Inconsistency
Files designed for each application are usually unique, which gives rise to repetition of data across files, i.e. redundancy. This leads to inconsistency in the data: copies of the same data in different files may contain different information, which is a serious problem.
Data Integrity Lost: As the data is redundant and inconsistent, data integrity is lost; since data items come from different files, integrating the data poses a problem.
Isolation: As the data is scattered across various files, it is usually very difficult for application programs to retrieve the required data because of this isolation.
Access: It is very difficult to access data as it is spread throughout the files. Whenever there is a new request that was not anticipated, retrieving and accessing the data becomes difficult.
Lack of Security: As a file is accessed in its totality, security becomes difficult: one usually cannot restrict access to part of the data, since access is granted to the whole file. The data is therefore insecure, which is a great setback to the use of a computerised file management system.
Concurrent Update: Many of these file systems work in a multi-user environment, where many users access and update the data simultaneously, which usually creates inconsistencies in the data if proper measures are not followed. This hinders a multi-user environment for data access and updating. With all the above problems, file management systems continued to be used; these problems are now solved by Database Management Systems (DBMS).
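The concurrent-update problem can be illustrated with a small Python sketch (the account, amounts and thread counts are illustrative assumptions): two threads performing read-modify-write on a shared value can lose updates unless the critical section is protected, for example with a lock.

```python
# Sketch of the concurrent-update problem and one "proper measure":
# protecting the read-modify-write with a lock. Without the lock,
# two threads incrementing the same balance can overwrite each
# other's updates, leaving the data inconsistent.
import threading

balance = 0
lock = threading.Lock()

def deposit(times):
    global balance
    for _ in range(times):
        with lock:             # remove this and updates can be lost
            balance += 1       # read-modify-write critical section

threads = [threading.Thread(target=deposit, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # -> 200000 with the lock; often less without it
```

A DBMS applies the same principle systematically, using transactions and locking so that concurrent users cannot leave the data inconsistent.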