
Tuesday, 26 November 2019

Lupine Publishers | Mitigating Disaster using Secure Threshold-Cloud Architecture





Lupine Publishers | Current Trends in Computer Sciences & Applications


Abstract

There are many risks in moving data into public cloud environments, along with an increasing threat of large-scale data leakage during cloud outages. This work aims to apply secret sharing methods, as used in cryptography, to create shares of a cryptographic key, then disperse them and recover the key when needed in a multi-cloud environment. It also aims to prove that the combination of a secret sharing scheme and multi-clouds can provide a new direction in disaster management, using it to mitigate cloud outages rather than, as in current designs, recover after the outages. Experiments were performed using ten different cloud service providers at share policies of 2 from 5, 3 from 5, 4 from 5, 4 from 10, 6 from 10 and 8 from 10, for which key recovery during cloud outages was still possible and even faster than in normal situations. All the same, key recovery was impossible when the number of cloud outages exceeded the secret sharing threshold. To ameliorate this scenario, we proposed a resilient system using the concept of self-organization introduced by Nojoumian et al. in 2012 for improving resource availability, but with some modifications to the original concept. The proposed architecture is as presented in our poster: Improving Resilience in Multi-Cloud Architecture.
Keywords: Secret Shares; Disaster Mitigation; Threshold Scheme; Cloud Service Providers

Introduction

With the introduction of cloud services for disaster management at a scalable rate, small business owners appeared to have found the needed succour: a cheaper and more secure disaster recovery mechanism to provide business continuity and remain competitive with larger businesses. But that is not so, as cloud outages have become a nightmare. Recent statistics by the Ponemon Institute [1] on the cost of data centre outages show a 38% increase, from $505,502 in 2010 to $740,357 as of January 2016. Using activity-based costing, they captured direct and indirect costs, including damage to mission-critical data, the impact of downtime on organizational productivity, and damage to equipment and other assets. The statistics were derived from 63 data centres based in the United States of America. These events may have encouraged the adoption of multi-cloud services to divert customers' traffic in the event of a cloud outage. Some fine-grained proposed solutions are focused on redundancy and backup, such as: local backup [2]; geographical redundancy and backup [3]; the use of inter-private cloud storage [4]; and resource management for data recovery in storage clouds [5]. But in all these, cloud service providers see disaster recovery as a way of getting the system back online and making data available after a service disruption, not as contending with disaster by providing robustness capable of mitigating the shocks and losses resulting from these disasters.
This work aims to apply secret sharing methods, as used in cryptography [6,7], to create shares of a cryptographic key, then disperse them and recover the key when needed in a multi-cloud environment. It also aims to prove that the combination of a secret sharing scheme and multi-clouds can provide a new direction in disaster management, mitigating cloud outages rather than, as in current designs, recovering after the outages. Experiments were performed using ten different cloud service providers for storage services; at different times of cloud outages, key recovery was still possible and even faster than in normal situations. All the same, key recovery was impossible when the number of cloud outages exceeded the secret sharing threshold. To ameliorate this scenario, we propose to employ the concept of self-organisation introduced by Nojoumian et al. [8] for improving resource availability, with some modifications. The rest of the work is organised as follows: Section II, Literature Review, takes a closer look at current practices and the use of secret sharing in cloud-based disaster recovery, with particular interest in the methods used in their designs. Section III presents our approach, Section IV presents results and evaluations, and Section V concludes with future work and lessons learnt.

Literature Review

There are research solutions based on different variants of secret sharing schemes and multi-cloud architecture that give credence to their resilience in the face of failures and to keyless data security, such as: Ukwandu et al. [9], RESCUE: Resilient Secret Sharing Cloud-based Architecture; and Alsolami & Boult [10], CloudStash: Using Secret-Sharing Scheme to Secure Data, Not Keys, in Multi-Clouds. Others are Fabian et al. [11] on collaborative and secure sharing of healthcare data in multi-clouds and [12] on secret sharing for health data in multi-provider clouds. While RESCUE provided an architecture for resilient cloud-based storage with keyless data security capabilities, using a secret sharing scheme for data splitting, storage and recovery, CloudStash relied on the same strengths to prove the security of data using secret sharing schemes in a multi-cloud environment, and Fabian et al. proved resilience and robust sharing in the use of a secret sharing scheme for data sharing in a multi-cloud environment. Because our approach combines secret sharing and multi-clouds in developing cloud disaster management, the need therefore arises to review the methods currently used in cloud-based disaster recovery in multi-cloud systems and their shortcomings.
a) Remus: Cully et al. [13] described a system that provides software resilience in the face of hardware failure (VMs), such that an active system can continue execution on an alternative physical host while preserving the host configurations, by using speculative execution. Its strength lies in preserving the system's software independently during hardware failure.
b) Second Site: Proposed by Rajagopalan et al. [14], Second Site extends the Remus high-availability system, based on virtualization infrastructure, by allowing very large VMs to be replicated across many data centres over the Internet. One main aim of this solution is to increase the availability of VMs across networks. Like the other DR systems discussed, Second Site is not focused on contending with downtime and the security of data during cloud outages.
c) DR-Cloud: Yu et al. [15] relied on data backup-and-restore technology to build a system proposed to provide high data reliability, low backup cost and short recovery time, using multiple optimisation scheduling strategies. The system is built on a multi-cloud architecture using Cumulus [16] as the cloud storage interface. This leaves the need for further studies on eliminating system downtime during disaster and providing consistent data availability, as there is no provision for either in that work.

Our Approach

Our approach combines a secret sharing scheme with multi-clouds to achieve resilience, with the aim of redefining cloud-based disaster management from recovery after cloud outages to mitigation of cloud outages.

The Architecture

The architecture shown in Figure 1 covers key share creation, dispersal and storage, while Figures 2 & 3 cover share retrieval and key recovery.
Figure 1: Key Share Creation, Dispersal and Storage.
Figure 2: Share Retrievals and Key Recovery.
Figure 3: Cloud Service Providers at Different Scenarios.
Share creation and secret recovery: Figure 1 explains our design of key share creation, dispersal and storage using different cloud service providers. Share creation: The dealer determines the threshold (t), the number of host shares from which recovery is possible, and the degree of the polynomial, derived by subtracting 1 from the threshold. In this case, the threshold is 3 and the degree of the polynomial is 2. He initiates a secret sharing scheme by generating the polynomial; the coefficients a and b are random values and c is the secret, the constant term of the polynomial as well as the intercept of the graph. He generates 5 shares for the hosts H1… H5, sends the shares to them in equal ratio and weight, and thereafter leaves the scene.
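To make the scheme concrete, the following is a minimal Java sketch of this share-creation step for a (t = 3, n = 5) policy. The prime modulus, class and method names are illustrative assumptions, not the paper's implementation.

import java.math.BigInteger;
import java.security.SecureRandom;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of share creation for a (t = 3, n = 5) policy over a
// prime field. The prime and all names are illustrative assumptions.
public class ShareCreation {
    static final BigInteger PRIME = BigInteger.TWO.pow(127).subtract(BigInteger.ONE);
    static final SecureRandom RNG = new SecureRandom();

    // Returns n points (x, f(x)) of a random degree t-1 polynomial whose
    // constant term c is the secret; the other coefficients (a, b) are random.
    static List<BigInteger[]> makeShares(BigInteger secret, int t, int n) {
        List<BigInteger> coeffs = new ArrayList<>();
        coeffs.add(secret);                                          // c, the secret
        for (int i = 1; i < t; i++)
            coeffs.add(new BigInteger(PRIME.bitLength() - 1, RNG));  // a, b
        List<BigInteger[]> shares = new ArrayList<>();
        for (int x = 1; x <= n; x++) {                               // one share per host H1..Hn
            BigInteger bx = BigInteger.valueOf(x), y = BigInteger.ZERO;
            for (int i = 0; i < coeffs.size(); i++)
                y = y.add(coeffs.get(i).multiply(bx.pow(i))).mod(PRIME);
            shares.add(new BigInteger[]{bx, y});
        }
        return shares;
    }
}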
Secret recovery: Just as in Shamir [6], authorised participants, following the rules stated earlier, are able to recover the secret using Lagrange interpolation once the threshold condition is met. The participants contribute their shares to recover the secret.
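Correspondingly, here is a minimal sketch of recovery by Lagrange interpolation at x = 0, written as a method added to the ShareCreation class above under the same assumptions. Any t or more shares reproduce the secret; fewer cannot.

    // Recover the secret by evaluating the Lagrange interpolating
    // polynomial through the given (x, y) shares at x = 0.
    static BigInteger recover(List<BigInteger[]> shares) {
        BigInteger secret = BigInteger.ZERO;
        for (BigInteger[] si : shares) {
            BigInteger num = BigInteger.ONE, den = BigInteger.ONE;
            for (BigInteger[] sj : shares) {
                if (sj[0].equals(si[0])) continue;
                num = num.multiply(sj[0].negate()).mod(PRIME);        // (0 - xj)
                den = den.multiply(si[0].subtract(sj[0])).mod(PRIME); // (xi - xj)
            }
            // yi times the Lagrange basis polynomial evaluated at 0
            secret = secret.add(si[1].multiply(num).multiply(den.modInverse(PRIME))).mod(PRIME);
        }
        return secret;
    }

    // Usage: recover(makeShares(secret, 3, 5).subList(0, 3)) equals secret;
    // any 3 of the 5 shares suffice.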

Results and Evaluations

Test: Cloud outages against normal situations. This test starts from the assumption, to be examined, that a cloud outage prevents secret recovery.

Discussions

The results above show that cloud outage has no negative effect on key recovery; rather, it reduces the overhead in comparison with normal situations. They show the relationship between cloud outage and normal operational conditions. At a twenty percent (20%) failure rate using the 3 from 5 share policy, the system becomes faster by about sixteen percent (16.41%), while at a forty percent (40%) failure rate using the same share policy, the download speed is faster by a little above fifty-one percent (51.80%). Looking at the higher 6 from 10 share policy, at a thirty percent (30%) failure rate the download speed is higher by a little above thirty-seven percent (37.90%), while at a forty percent (40%) failure rate the system performed better by about forty-three percent (42.99%). The implication is that as long as the failure rate does not reach or exceed the threshold, system performance improves; no result could be obtained when the number of cloud outages equalled or exceeded the threshold. These results therefore do not support the assumption above that cloud outage has a negative effect on key recovery. There is no significant evidence that the size of the share affects key recovery during cloud outages, because at a forty percent (40%) failure rate using shares of 10KB, the 3 from 5 policy shows a performance gain of above fifty-one percent, while the 6 from 10 share policy shows approximately forty-three percent (42.99%).

Conclusions, Lessons Learnt and Future Work

Current cloud-based disaster recovery systems have focused on faster recovery after an outage, and the underlying issue has been the method applied, which centres on backing up data and replicating the backed-up data to several hosts. This method has shown major delays in providing strong failover protection, as there has to be a switch from one end to another during a disaster in order to bring systems back online. The need thus arises for research to focus on methods capable of mitigating this interruption by providing strong failover protection as well as stability during adverse failures, keeping systems running. Such a method is what we have provided in this paper. Because secret sharing schemes are a keyless method of encryption, data at rest and in transit are safe, as they exist in a meaningless format.
Key recovery is done in system memory, and share verification is carried out using an inbuilt share checksum mechanism based on SHA-512, which validates shares before recovery; otherwise, share recovery returns an error and halts. We have learnt that, with our method, cloud outage does not prevent key recovery but in fact hastens it, from the results available. We also understand that when the number of cloud outages exceeds the threshold of the share policy, key recovery becomes impossible; to ameliorate this situation, we propose as future work to use the concept of self-organization proposed by Nojoumian et al. [8] to manage cloud resources, though with some modifications, so as to maintain share availability from cloud service providers.
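As a hedged illustration of this checksum idea (the paper does not give its exact mechanism, so class and method names here are assumptions), each dispersed share could carry a SHA-512 digest that is validated before the share is used in recovery:

import java.security.MessageDigest;

// Sketch: each share is stored with a SHA-512 digest computed at dispersal
// and validated before recovery; on mismatch, recovery errors out and halts.
public class ShareChecksum {
    static byte[] digest(byte[] share) throws Exception {
        return MessageDigest.getInstance("SHA-512").digest(share);
    }

    static byte[] verify(byte[] share, byte[] storedDigest) throws Exception {
        if (!MessageDigest.isEqual(digest(share), storedDigest))
            throw new IllegalStateException("share failed SHA-512 validation; halting recovery");
        return share;
    }
}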

For more Lupine Publishers Open Access Journals Please visit our website:
http://lupinepublishers.us/
For more Current Trends in Computer Sciences & Applications Please Click Here:
https://lupinepublishers.com/computer-science-journal/



Friday, 19 July 2019

Lupine Publishers | Current Trends in Computer Sciences & Applications

Computer Science Research Journals | Lupine Publishers

Detecting Distributed Denial-of-Service (DDoS) Attacks

Abstract

Since the number of damage cases resulting from distributed denial-of-service (DDoS) attacks has recently been increasing, the need for agile detection and appropriate response mechanisms against DDoS attacks has also been increasing. The latest DDoS attacks propagate swiftly and exhibit varied attack patterns, so a lighter mechanism is needed to detect and respond to such new and transformed types of attack with greater speed. In wireless network systems, security is a main concern for users.

Introduction

Security of information is of utmost importance to organizations striving to survive in a competitive marketplace. Network security has been an issue since computer networks became prevalent, especially now that the Internet is changing the face of computing. As dependency on the Internet for business transactions increases daily, so do cyber-attacks by intruders who exploit flaws in Internet architecture, protocols, operating systems and application software to carry out their nefarious activities. Such hosts can be compromised within a short time to run arbitrary and potentially malicious attack code transported in a worm or virus or injected through installed backdoors. Distributed denial-of-service (DDoS) attacks use such poorly secured hosts as attack platforms and cause degradation and interruption of Internet services, resulting in major financial losses, especially if commercial servers are affected (Duberdorfer, 2004).

Related Works

Brignoli et al. [1] proposed DDoS detection based on traffic self-similarity estimation, a relatively new approach built on the notion that undisturbed network traffic displays fractal-like properties. These fractal-like properties are known to degrade in the presence of abnormal traffic conditions such as DDoS, so detection is possible by observing changes in the level of self-similarity in the traffic flow at the target of the attack. Existing literature assumes that DDoS traffic lacks the self-similar properties of undisturbed traffic; the researchers show how existing botnets could be used to generate a self-similar traffic flow and thus break that assumption. Streilien et al. (2005) worked on detection of DoS attacks through the polling of Remote Monitoring (RMON) capable devices, developing a detection algorithm for simulated flood-based DoS attacks that achieves a high detection rate and a low false alarm rate.
Yeonhee Lee [2] focused on the scalability of anomaly detection and introduced a Hadoop-based DDoS detection scheme to detect multiple attacks in a huge volume of traffic. Different from single-host approaches that try to enhance memory efficiency or to customize process complexity, their method leverages Hadoop to solve the scalability issue through parallel data processing. Their experiments show that a simple counter-based DDoS attack detection method can easily be implemented in Hadoop, with a performance gain from using multiple nodes in parallel. They expect a signature-based approach to be well suited to Hadoop, but note that a real-time defence system remains to be developed, because current Hadoop is oriented to batch processing.

Proposed System Architecture of Intrusion Detection Based on Association Rule

The proposed architecture for real-time detection of DoS intrusions via association rule mining is divided into two phases: learning and testing. The network sniffer processes the tcpdump binary into a standard format for learning. During the learning phase, duplicate records, as well as columns holding a single repeated value, are expunged from the record set to reduce operational cost. A HashMap table is created by the classification model to keep track of the counts of the various likely class marks that can match the current network-traffic record; this table is discarded once the class mark with the highest count has been selected (a minimal sketch of this voting step follows the table captions below). Depicted in Table 1 is the association rule classifier algorithm (Tables 2-4).
Table 1: Association Rule Mining Classifier Algorithm.
Table 2: Sample Rules.
Table 3: Sampled number combination table.
Table 4: Sample Network Traffic Data.
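The following is a minimal Java sketch of the class-mark voting step described above: every rule whose conditions match the current traffic record votes into a transient HashMap, and the class mark with the highest count is selected, after which the map is discarded. The rule representation and feature names are illustrative assumptions, not the paper's exact format.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of rule-based classification with HashMap vote counting.
public class RuleVoting {
    record Rule(Map<String, String> conditions, String classMark) {}

    static String classify(Map<String, String> traffic, List<Rule> rules) {
        Map<String, Integer> votes = new HashMap<>(); // transient count table
        for (Rule r : rules) {
            boolean matches = r.conditions().entrySet().stream()
                    .allMatch(e -> e.getValue().equals(traffic.get(e.getKey())));
            if (matches) votes.merge(r.classMark(), 1, Integer::sum);
        }
        // The map is discarded once the highest-count class mark is chosen.
        return votes.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse("unknown");
    }
}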

System Implementation

This section presents the implementation of the association rule classifier model, documentation of the designed system and the user interfaces; the software and hardware requirements of the system; and the testing of the system for verification and validation of its functions, as well as the results [3-10].

Interface Design

Start Page: This page is the first seen when the application is executed. The option button enables already generated rules to be reused for classifying another file. If the check box is unselected, the option of selecting a folder containing already generated rules is enabled. Furthermore, select the source file and then the new file to classify. The page is as shown below (Figure 1).
Figure 1: Start page.
Creating Folder: The first step is to create a folder where the rules, and the numbers for generating the rules, will be saved. This operation is allowed only if the check box on the form is checked; if there is an error in the folder creation process, an error message is displayed. The Open button for the file name is enabled once the folder is created successfully. The interface for folder creation is shown below (Figure 2).
Figure 2: Setting folder name.
Source File: The source of data for generating the rules is the next requirement. The Open button next to the Mine button enables you to specify the file containing the data from which the rules will be generated. Before the rules are generated from the file, the size of the file is calculated to obtain the number of combinations (arrangements) required to generate the rules. The selected source file is seen below (Figure 3).
Figure 3: Locating source file.
a) Mining File: The selected source file is mined to extract the rules needed for classification. Depending on the size of the file, it could take a while to complete. On completion, a message dialog is displayed, as shown below (Figure 4).
Figure 4: Mining source file.
b) File Classification: After obtaining all the rules from the source file, the open button for selecting a data file to classify is enabled. A file can be classified based on the rules generated. The selected file can be obtained using the open button, as shown below (Figure 5).
Figure 5: Locating file to classify.
c) Generate Result: Click on the start button to begin the generation of output or result from the selected file to classify, based on the rules generated. The output or result is saved in the folder called result within the folder specified above. Depending on the size of the file to classify, the output might take a while. On completion of the classification, a dialog appears to signify the completion of the classification. This is shown in Figure 6.
Figure 6: Generating result file from selected file based on rules obtained.
d) Exit Program: Click the Exit button to exit or terminate the application (Figure 7).
Figure 7: Exit program.

Experimental Setup and Results

The training dataset consisted of 37,079 records, among which there are 99 (0.27%) teardrop, 36,944 (99.64%) smurf, 20 (0.05%) pod, 15 (0.04%) neptune and 1 (0.003%) land connections. The training dataset used for testing is made up of 400 records, out of which there are 98 (24.5%) teardrop, 266 (66.5%) smurf, 2 (0.5%) pod, 15 (3.75%) neptune and 1 (0.25%) land, while the test dataset is made up of 300 records, out of which there are 40 (13.3%) pod, 107 (35.6%) smurf, 9 (3%) teardrop, 43 (14.3%) neptune, 9 (3%) land, 33 (11%) apache2, 21 (7%) normal, 25 (8.3%) mailbomb and 8 (2.6%) snmpgetattack [11-20].
The test and training data are not from the same probability distribution. Each connection record uses 20 of the 41 attributes describing different features of the connection (excluding the label attribute).
The experiments with association rule classification were divided into two major phases. In the first phase, rules were generated for each attribute (and combination of attributes) of the network traffic dataset using the two association rule indices, confidence and support. In the second phase, the rules generated in the first phase were pruned to remove irrelevant rules so as to improve the classification process (a sketch of these steps follows the list below). The pruning process included:
i. Removal of all rules with confidence less than 50%
ii. Removal of all duplicate rules
iii. Removal of identical rules pointing to different attacks
iv. Exclusion of all one-attribute rules from classification
Both the initially generated rules and the pruned rules were then used to classify the training set as well as the test dataset.
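The following is a minimal Java sketch of these four pruning steps, under the assumption that each rule carries its conditions, class mark and confidence (a representation assumed here for illustration, not the paper's):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of pruning steps i-iv over (conditions, classMark, confidence) rules.
public class RulePruning {
    record Rule(Map<String, String> conditions, String classMark, double confidence) {}

    static List<Rule> prune(List<Rule> rules) {
        Set<List<Object>> seen = new HashSet<>();
        Map<Map<String, String>, Set<String>> marksByConditions = new HashMap<>();
        List<Rule> kept = new ArrayList<>();
        for (Rule r : rules) {
            if (r.confidence() < 0.5) continue;       // i. confidence below 50%
            if (r.conditions().size() == 1) continue; // iv. one-attribute rules dropped
            if (!seen.add(List.of(r.conditions(), r.classMark()))) continue; // ii. duplicates
            marksByConditions.computeIfAbsent(r.conditions(), k -> new HashSet<>())
                    .add(r.classMark());
            kept.add(r);
        }
        // iii. identical condition sets pointing to different attacks are removed
        kept.removeIf(r -> marksByConditions.get(r.conditions()).size() > 1);
        return kept;
    }
}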

Results

Tables 5-14 show the confusion matrices obtained from association rule mining with 20 attributes.
Table 5: Confusion matrix obtained from one attribute combination from test dataset (unpruned rules).
Table 6: Confusion matrix obtained from one attribute combination from test dataset (pruned rules).
Table 7: Confusion matrix obtained from one and two attribute combination from test dataset (unpruned rules).
Table 8: Confusion matrix obtained from one and two attribute combination from test dataset (pruned rules).
Table 9: Confusion matrix obtained from one, two and three attribute combination from test dataset (unpruned rules).
Table 10: Confusion matrix obtained from one, two and three attribute combination from test dataset (pruned rules).
Table 12: Confusion matrix obtained from one, two, three and four attribute combination from test dataset (pruned rules).
Table 13: Confusion matrix obtained from one, two, three, four and five attribute combination from test dataset (unpruned rules).
Table 14: Confusion matrix obtained from one, two, three, four and five attribute combination from test dataset (pruned rules).

Discussion

The results in Tables 15-19 were obtained from classification of the training dataset with the raw unpruned rule set. From the tables, the accuracy of classification of smurf attacks ranges between 99.6% and 100%. Pod attacks could not be classified correctly by the model: about 95% of pod attacks were classified as smurf attacks, while the rest were classified as pod. 98% of teardrop and 100% of land attacks were also correctly classified, while less than 20% of neptune attacks were classified correctly using rules based on one, two and three combined attributes; rules based on four and five combined attributes performed better at classifying neptune attacks (65%) than the one to three attribute combinations. Table 20 summarises all the attacks correctly classified in Tables 15-19.
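Since the matrices themselves appear as tables, the per-class percentages in Table 20 derive from each confusion matrix in the usual way: the diagonal cell divided by the row total. A minimal sketch follows; the counts are illustrative, chosen only to echo the percentages quoted above, and are not the paper's actual tables.

// Per-class correct-classification percentages from a confusion matrix:
// rows are actual classes, columns are predicted classes.
public class MatrixSummary {
    public static void main(String[] args) {
        String[] labels = {"smurf", "pod", "teardrop", "neptune", "land"};
        int[][] matrix = {
            {36944, 0, 0, 0, 0},  // smurf: 100% correct
            {19, 1, 0, 0, 0},     // pod: ~95% misread as smurf
            {2, 0, 97, 0, 0},     // teardrop: ~98% correct
            {13, 0, 0, 2, 0},     // neptune: under 20% correct
            {0, 0, 0, 0, 1},      // land: 100% correct
        };
        for (int i = 0; i < labels.length; i++) {
            int total = java.util.Arrays.stream(matrix[i]).sum();
            System.out.printf("%s: %.1f%% correctly classified%n",
                    labels[i], 100.0 * matrix[i][i] / total);
        }
    }
}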
Table 15: Confusion matrix obtained from one attribute combination from training dataset (unpruned rules).
Table 16: Confusion matrix obtained from one and two attribute combination from training dataset (unpruned rules).
Table 17: Confusion matrix obtained from one, two and three attribute combination from training dataset (unpruned rules).
Table 18: Confusion matrix obtained from one, two, three and four attributes combination from training dataset (unpruned rules).
Table 19: Confusion matrix obtained from one, two, three, four and five attributes combination from training dataset (unpruned rules).
Table 20: Percentages of Correctly Classified Attacks in Tables 15-19.
Table 21: Confusion matrix obtained from one attribute combination from training dataset (pruned rules).
Table 22: Confusion matrix obtained from one and two attribute combination from training dataset (pruned rules).
Table 23: Confusion matrix obtained from one, two and three attribute combination from training dataset (pruned rules).
Table 24: Confusion matrix obtained from one, two, three and four attributes combination from training dataset (pruned rules).
Table 25: Confusion matrix obtained from one, two, three, four and five attributes combination from training dataset (pruned rules).
Table 26: Percentages of Correctly Classified Attacks in Tables 21-25 (pruned rules).
The results in Tables 21-25 were obtained from classification of the training dataset with the pruned rule set; the pruned rule set gives better results than the unpruned one. All the attack types except neptune and pod were correctly classified (100%) for all rule categories; pod was 90% correctly classified with single-attribute rules and 100% correctly classified with the other four categories of rules. Neptune recorded 100% correct classification for four and five attribute combinational rules, 93% correct classification for two and three attribute combinational rules, and 69% correct classification for one-attribute rules. Table 26 summarises all the attacks correctly classified in Tables 21-25.

Implementation with Test Data

The association rule classifier was tested with test data that did not belong to the same network as the training dataset; three attacks (apache2, mailbomb, snmpgetattack) in the test data were not present in the training set. The tables below (Tables 27-36) show the confusion matrices obtained from the association rule classification of the test data.

Discussion

Table 27: Confusion matrix obtained from one attribute combination from test dataset (unpruned rules).
Table 28: Confusion matrix obtained from one and two attribute combination from test dataset (unpruned rules).
Table 29: Confusion matrix obtained from one, two and three attribute combination from test dataset (unpruned rules).
Table 30: Confusion matrix obtained from one, two, three and four attribute combination from test dataset (unpruned rules).
Table 31: Confusion matrix obtained from one, two, three, four and five attribute combination from test dataset (unpruned rules).
The results in Tables 27-31 were obtained from classification of the test dataset with the raw unpruned rules. From the tables, pod attacks were classified as teardrop and smurf attacks. Smurf and teardrop attacks were 100% and 88% correctly classified respectively; all neptune attacks were classified as land attacks; all land attacks were correctly classified; and between 77% and 90% of normal traffic was classified correctly. 94% and 6% of apache2 attacks were classified as land and neptune attacks respectively. Snmpget attacks were classified as smurf and teardrop attacks.
Table 32: Confusion matrix obtained from one attribute combination from test dataset (pruned rules).
Table 33: Confusion matrix obtained from one and two attribute combination from test dataset (pruned rules).
Table 34: Confusion matrix obtained from one, two and three attribute combination from test dataset (pruned rules).
Table 35: Confusion matrix obtained from one, two, three and four attribute combination from test dataset (pruned rules).
Table 36: Confusion matrix obtained from one, two, three, four and five attribute combination from test dataset (pruned rules).
The results in Tables 32-36 were obtained from classification of the test dataset with the pruned rules. Rules based on three, four and five attribute combinations classified pod, smurf, teardrop, neptune and land attacks correctly. Apache2, mailbomb and snmpget attacks were classified as either unknown, smurf or teardrop attacks. Table 37 summarises all the correctly classified attacks. All the attacks present in the test dataset that were not used for training the association rule classifier were classified as attacks by the unpruned rules; the pruned rules classified them as unknown attacks. Tables 38 & 39 below show how they were classified [21-46].
Table 37: Summary of Correctly Classified Attacks from the Test Dataset.
Table 38: Classification of Attacks not Present in the Test Data (unpruned rules).

Conclusion

The need for effective and efficient security on our systems cannot be over-emphasized. This position is strengthened by the degree of human dependency on computer systems and the electronic superhighway (the Internet), which grows in size and complexity on a daily basis for business transactions, information and research. Association rule methods of improving intrusion detection systems based on machine learning techniques were described and implemented on an Intel Duo-core 2.88GHz CPU with 1024MB RAM, using the Java programming language.
The work is motivated by the growing importance of intrusion detection to the emerging information society. The research provided background detail on intrusion detection techniques and briefly described intrusion detection systems. In this research, an association rule-based algorithm was newly developed for mining known patterns. The results of the developed tools are satisfactory, though they can be improved upon. These tools will go a long way in alleviating the problems of data security by detecting security breaches on computer systems.
For more Lupine Publishers Open Access Journals Please visit our website: www.lupinepublishersgroup.com/
