Monday, June 18, 2012

Wireless Communication Technologies: connecting the entire world without limit



Md. Bahadur Ali
GM (Finance & Accounts)


Network technology plays a significant role in science and business. Scientists continually innovate and develop new technologies to fit businesses' needs and to satisfy people's demands. The first generation of wireless mobile communication systems (1G) was introduced in the early 1980s and completed in the early 1990s. 1G was analog and supported the first generation of analog cell phones, with speeds of up to 2.4 kbps. The second generation (2G), fielded in the late 1980s and finished in the late 1990s, was planned mainly for voice transmission with digital signals and offered speeds of up to 64 kbps. The third generation (3G) was developed in the late 1990s and was largely completed by the late 2000s. 3G not only provides transmission speeds from 125 kbps to 2 Mbps but also includes many services, such as global roaming, superior voice quality and always-on data. The fourth generation (4G), first raised in 2002, is a conceptual framework and a discussion point for addressing the future need for a high-speed wireless network that can carry multimedia and data and interface seamlessly with the wire-line backbone network. 4G speeds are theoretically promised to reach 1 Gbps. Beyond that will be 5G, with extremely high transmission speeds and no limitation on access or zone size.

The main factors distinguishing 3G from 4G will be data rates, services, transmission methods, access technology to the Internet, compatibility with the wire-line backbone network, quality of service and security. 4G should support peak rates of at least 100 Mbps in full-mobility wide-area coverage and 1 Gbps in low-mobility local-area coverage. 3G speeds reach only about 2 Mbps, far slower than those of 4G. On the service side, 3G networks find it difficult to roam globally and interoperate across networks, whereas 4G is intended to be a global standard providing global mobility and service portability, so that service providers will no longer be limited to a single system. In other words, 4G should be able to provide very smooth, ubiquitous global roaming at lower cost. Furthermore, 3G is based on a wide-area concept applying circuit and packet switching for transmission with a limited set of access technologies, such as WCDMA, CDMA and TD-SCDMA. The 4G standard, however, will be based on broadband IP-based packet switching with seamless access convergence. This means that in 4G all access technologies, services and applications can run without restriction through a wireless backbone over the wire-line backbone using IP addresses. Put differently, 4G will bring us an almost perfect real-world wireless web, sometimes called the "wwww": the World Wide Wireless Web.
Comparison among technologies (Generations)

| Generation | Definition | Throughput / Speed | Technology | Time period | Features |
|---|---|---|---|---|---|
| 1G | Analog | 14.4 kbps (peak) | AMPS, NMT, TACS | 1970-1980 | During 1G, wireless phones were used for voice only. |
| 2G | Digital narrowband circuit data | 9.6/14.4 kbps | TDMA, CDMA | 1990-2000 | 2G capacity is achieved by allowing multiple users on a single channel via multiplexing. During 2G, cellular phones were used for data as well as voice. |
| 2.5G | Packet data | 171.2 kbps (peak); 20-40 kbps | GPRS | 2001-2004 | In 2.5G the Internet became popular and data became more relevant; multimedia services and streaming started to show growth. |
| 3G | Digital broadband packet data | 3.1 Mbps (peak); 500-700 kbps | CDMA2000 (1xRTT, EV-DO), UMTS, EDGE | 2004-2005 | Multimedia services and streaming became more popular; universal access and portability across different device types (telephones, PDAs, etc.) became possible. |
| 3.5G | Packet data | 14.4 Mbps (peak); 1-3 Mbps | HSPA | 2006-2010 | 3.5G supports higher throughput and speeds to meet consumers' growing data needs. |
| 4G | Digital broadband packet, all-IP, very high throughput | 100-300 Mbps (peak); 3-5 Mbps; 100 Mbps (Wi-Fi) | WiMAX, LTE, Wi-Fi | Now | Speeds are increased further to keep up with the data demand of various services; high-definition streaming is supported. |
| 5G | Not yet defined | Probably gigabits per second | Not yet available | Soon (probably 2020) | No 5G technology is deployed yet; when available it will provide very high speeds to consumers. |

How 4G works:
In 4G wireless networks, each node will be assigned a 4G IP address (based on IPv6), formed from a permanent "home" IP address and a dynamic "care-of" address that represents its actual location. When a device (say, a computer) on the Internet wants to communicate with another device (say, a cell phone) in the wireless network, the computer sends a packet to the 4G IP address of the cell phone, targeting its home address. A directory server on the cell phone's home network then forwards this packet to the cell phone's care-of address through a tunnel (mobile IP); moreover, the directory server also informs the computer of the cell phone's care-of address (its real location), so that subsequent packets can be sent to the cell phone directly. The idea is that the 4G IP address (IPv6) can carry more information than the IPv4 addresses we use today. IPv6 (Internet Protocol version 6) uses 128 bits, four times the 32 bits of an IPv4 address. A 32-bit IPv4 address looks like 216.37.129.9, or 11011000.00100101.10000001.00001001 in binary. An IPv6 address, being four times as long, can be pictured as four IPv4-sized sets, for example 216.37.129.9, 79.23.178.229, 65.198.2.10, 192.168.5.120, with each set assigned a different function and usage. In this example, the first set (216.37.129.9) can be defined as the "home address"; it works just like the normal IP address we use for addressing on the Internet and in networks. The second set (79.23.178.229) can be declared as the "care-of" address, set up for the communication from cell phones to computers. After these addresses have established a link between the cell phone and the PC, the care-of address replaces the home address; that is, the communication channel switches from the first set to the second set of the IPv6 address. The third set (65.198.2.10) can be assigned as a tunnel (mobile IP address): the communication channel between the wire-line network and the wireless network. An agent, the directory server, between the cell phone and the PC uses this mobile IP address to establish a channel to the cell phone. Finally, the last set (192.168.5.120) can be a local network address for virtual private network (VPN) sharing. With such a data-rich IP address, software can use these sets to distinguish different services and to communicate and combine with other network areas, such as the computer (PC) and cell-phone networks in this example. The table below gives a basic comparison of IPv6 and IPv4, showing how much richer IPv6 is than IPv4 in addressing capacity. Moreover, the 4G wireless network not only uses IPv6 as its transmission protocol but is also supported by OFDM, MC-CDMA, LAS-CDMA, UWB and Network-LMDS.

[Table: basic comparison of IPv4 and IPv6 addressing]

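To make the home-address/care-of-address indirection above concrete, here is a minimal sketch in Python. It assumes a highly simplified model (a single directory server holding a home-to-care-of binding and one forwarding step); the class names, example IPv6 addresses and the dictionary-based "packet" are invented for illustration and do not come from any 4G standard.

```python
# A minimal sketch of mobile-IP style forwarding, under a simplified model:
# a directory server on the home network maps a permanent home address to the
# node's current care-of address and tunnels the first packet to it.
# Names and addresses here are illustrative, not from any 4G specification.

class DirectoryServer:
    def __init__(self):
        self.bindings = {}          # home address -> current care-of address

    def register(self, home_addr, care_of_addr):
        self.bindings[home_addr] = care_of_addr

    def forward(self, packet):
        """Tunnel a packet addressed to a home address to the care-of address."""
        care_of = self.bindings[packet["dst"]]
        tunneled = dict(packet, tunnel_dst=care_of)
        # In the scheme described above, the sender is also told the care-of
        # address so that later packets can go to the mobile node directly.
        return tunneled, care_of


server = DirectoryServer()
server.register(home_addr="2001:db8::10", care_of_addr="2001:db8:beef::77")

packet = {"src": "2001:db8:cafe::1", "dst": "2001:db8::10", "payload": "hello"}
tunneled, care_of = server.forward(packet)
print("first packet tunneled to", tunneled["tunnel_dst"])
print("sender may now address the node directly at", care_of)
```

The point of the sketch is only the two-step pattern: the first packet is tunneled by the home network, and later packets can go straight to the care-of address.
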
OFDM stands for Orthogonal Frequency Division Multiplexing, a way of transmitting large amounts of digital data over a radio wave. OFDM works by splitting the radio signal into many smaller sub-signals that are then transmitted simultaneously at different frequencies to the receiver. In other words, OFDM is a digital modulation technology in which, within one symbol period, more than a thousand orthogonal waves are multiplexed to strengthen the signal, which makes it well suited to high-bandwidth digital data transmission. In OFDM, two wireless devices establish a connection before they start communicating; once the connection to a given target is made, the radio signal is split into many smaller sub-signals directed accurately at that target.
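
As a rough illustration of how OFDM splits data over orthogonal sub-carriers, the sketch below builds one OFDM symbol with an inverse FFT and recovers the bits with a forward FFT. The 64 sub-carriers, QPSK mapping and 16-sample cyclic prefix are arbitrary choices made only for the example.

```python
import numpy as np

# Minimal OFDM illustration: map bits to QPSK symbols, place one symbol on each
# of N orthogonal sub-carriers, and form the time-domain signal with an IFFT.
N = 64
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * N)

# QPSK mapping: pairs of bits -> complex symbols on the unit circle.
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

tx_time = np.fft.ifft(symbols)               # one OFDM symbol in the time domain
cyclic_prefix = tx_time[-16:]                # guard interval against multipath
tx_signal = np.concatenate([cyclic_prefix, tx_time])

# Receiver: drop the cyclic prefix and undo the IFFT to get the sub-carriers back.
rx_symbols = np.fft.fft(tx_signal[16:])
recovered_bits = np.empty_like(bits)
recovered_bits[0::2] = (rx_symbols.real < 0).astype(int)
recovered_bits[1::2] = (rx_symbols.imag < 0).astype(int)

print("all bits recovered:", np.array_equal(bits, recovered_bits))
```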

MC-CDMA stands for Multi-Carrier Code Division Multiple Access, which is essentially OFDM with a CDMA overlay. As in single-carrier CDMA systems, users are multiplexed with orthogonal codes to distinguish them in MC-CDMA. This allows flexible system design between cellular systems and single-cell systems. In MC-CDMA, moreover, each user can be allocated several codes, with the data spread in time or frequency.
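
The code-division part can be illustrated with a toy example: two users spread one symbol each with orthogonal Walsh codes, their signals are summed on the channel, and each receiver despreads with its own code. The code length of 4, BPSK symbols and noiseless channel are simplifying assumptions made for this sketch.

```python
import numpy as np

# Toy illustration of code-division multiplexing with orthogonal Walsh codes,
# the principle MC-CDMA overlays on OFDM sub-carriers.
walsh = np.array([
    [1,  1,  1,  1],
    [1, -1,  1, -1],
])                              # two orthogonal spreading codes

user_data = np.array([+1, -1])  # one BPSK symbol per user

# Each user spreads its symbol with its own code; the channel carries the sum.
channel = sum(d * c for d, c in zip(user_data, walsh))

# Each receiver despreads with its own code: correlate and normalise.
for i, code in enumerate(walsh):
    estimate = np.dot(channel, code) / len(code)
    print(f"user {i}: sent {user_data[i]:+d}, recovered {estimate:+.0f}")
```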

LAS-CDMA (Large Area Synchronized Code Division Multiple Access) is a patented 4G wireless technology developed by LinkAir Communication. LAS-CDMA enables high-speed data and increases voice capacity, and LinkAir's latest solution, Code-Division Duplex (CDD), merges the highly spectrally efficient LAS-CDMA technology with the superior data-transmission characteristics of Time-Division Duplex (TDD). The resulting combination makes CDD the most spectrally efficient, high-capacity duplex system available today. In the 4G picture, LAS-CDMA plays the role of the global transmission protocol (the "world cell" in the zone-size picture): if the distance between two wireless devices is too great, they use this protocol together with IPv6 to establish their connection.

In 4G technologies, UWB (ultra-wideband) radio can help solve multi-path fading by using very short electrical pulses spread across all frequencies at once. However, because of its low-power requirement, UWB can only be used indoors or underground. UWB therefore has to be paired with OFDM, which can carry large amounts of digital data with multi-path-tolerant algorithms: OFDM runs outdoors and UWB runs indoors to maintain signal strength. In the 4G wireless architecture, UWB plays the role of the "pico cell" covering very limited distances inside buildings.







Network-LMDS (Local Multipoint Distribution Service) is a broadband wireless technology used to carry voice, data, Internet and video services in the 25 GHz and higher spectrum. Its broadcast method, carrying voice, data, Internet and video traffic simultaneously, can solve the signal-fading problem in a local area. Network-LMDS can therefore play the role of the micro cell in the 4G architecture, acting as the main transmission protocol for wireless devices, as shown in the picture below.

The idea is that IPv6, OFDM, MC-CDMA, LAS-CDMA, UWB and Network-LMDS complement one another and can be arranged by zone size. IPv6 runs across all areas, because it is the basic protocol for addressing. LAS-CDMA can be assigned to the global area, the world cell; OFDM and MC-CDMA can run in the wide area, the macro cell; Network-LMDS serves the micro cell; and UWB serves the pico cell. Each of these transmission protocols has its drawbacks, and even in combination they are not yet sufficient to implement the full vision of 4G. Academic research and experiments are still required to develop 4G further over the next few to ten years.
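
One way to read this zone arrangement is as a simple lookup from coverage range to access technology. The distance thresholds in the sketch below are invented purely for illustration; the text only specifies the ordering pico < micro < macro < world cell, with IPv6 running everywhere.

```python
# A simple reading of the zone arrangement above as a range-based lookup.
# The kilometre thresholds are invented for illustration only.
ZONES = [
    (0.1, "Pico cell (UWB, indoor)"),
    (5.0, "Micro cell (Network-LMDS)"),
    (50.0, "Macro cell (OFDM / MC-CDMA)"),
    (float("inf"), "World cell (LAS-CDMA, with IPv6 everywhere)"),
]

def pick_access_technology(distance_km: float) -> str:
    for limit, technology in ZONES:
        if distance_km <= limit:
            return technology
    raise ValueError("unreachable")

for d in (0.05, 2, 20, 2000):
    print(f"{d:>7} km -> {pick_access_technology(d)}")
```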

Conclusion:
Nowadays, wireless technology is becoming increasingly popular and important in networking and the Internet. In this paper, I briefly introduced the background of 1G through 5G, compared the differences between 3G and 4G, and illustrated how 4G may work to become more convenient and powerful in the future. 4G only started around 2002, and many of its standards and technologies are still in development. Therefore, no one can be really sure what the future 4G will look like or what services it will offer. However, we can get a general idea of 4G from academic research: 4G is an evolution driven by 3G's limitations, and it will fulfil the idea of the wwww, the World Wide Wireless Web, offering more services and smooth global roaming at low cost.


References:
- Internet
- Mr. Mirza Ahmed Hussain, Ex Dir., BTTB


Saturday, June 9, 2012

Dense Wavelength Division Multiplexing (100 G Solution) adopted in SMW-5 Submarine Cable System - What is in the Laboratory for Higher Bandwidth Solution?


Md. Monwar Hossain *
Parvez M. Ashraf **

Introduction

Ever since the internet emerged from the research labs of scientists and engineers into people's everyday world during the 1990s, there has been exponential growth in the demand for bandwidth. Society and the economy have been shaped by the widespread use of the internet. Today the internet has put its spell on people with numerous features such as browsing, emailing, blogging, tweeting, Facebook, online gaming, conferencing, audio/video streaming, internet TV, etc. Having access to high-speed (in other words, high-capacity) triple-play (voice, data and video) communication for various business and non-business activities has become the norm for today's students, educators, researchers, professionals and others.
Meeting the rapidly increasing demand for capacity in the global and national information superhighways is therefore as great a challenge as ever, forcing telecommunication technology to go through major innovations and developments to meet the capacity requirements of core communication systems and networks. As we indicated in a Teletech article last year, the exponential growth in bandwidth demand has made the 10 G and 40 G generations of optical networking insufficient for future needs.
The SMW-4 consortium's submarine cable system was originally built with 10 G systems but is now being upgraded with 40 G and 100 G technologies. BSCCL has been a member of SMW-4 and has also become a party to the newly formed SMW-5 submarine cable consortium, in a bid to connect Bangladesh to a second submarine cable. The SMW-5 cable has been planned around 100 G technology, which is now mature and has been the standard for more than a couple of years. We hope that by the year 2014 Bangladesh will be able to take advantage of a technologically advanced system built throughout with 100 G technology.

2. Status of 100 G Technology

Fig. 1: A comparison of the constellation points for different modulation schemes at a bit rate of 46 Gbps (approx. 40 Gbps)

Fig. 2: BER vs OSNR for different modulation schemes at a bit rate of 46 Gbps (approx. 40 Gbps)

The optical line terminal equipment of the present and near future needs to be able to handle very high-speed traffic transported over long distances. Because of notable technical developments in DWDM components, it can be said that DWDM approaches have surpassed time division multiplexing (TDM) for high-speed transmission over long distances, which can even use a single fiber instead of a pair of fibers for transmission and reception at a given terminal. The economic and technical challenges associated with achieving a 100 G transmission solution have been overcome within the last four years. Error-free 100 Gbps transmission was demonstrated in 2008 by companies such as Nortel (now Ciena) at the Optical Fiber Conference/National Fiber Optic Engineers Conference. The key breakthrough in the solution has been the coherent receiver. For the past three decades or so, optical system receivers had worked by detecting the transmitted signal's intensity with on-off keying.
A coherent receiver operates by mixing a local oscillator with the incoming signal to be received. If the local oscillator is tuned to the frequency of the incoming signal, only the information from that signal is extracted and neighbouring channel information is ignored, so rejection of unwanted signals becomes much better. Most vendors have applied essentially this method to solving optical transmission challenges at higher line rates.
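
The mixing step can be shown numerically: multiplying the received field by a local oscillator tuned to the wanted channel shifts that channel to baseband, while a neighbouring channel is left at the difference frequency and averages away. The frequencies, symbol values and crude averaging filter below are arbitrary choices for the sketch.

```python
import numpy as np

# Numerical illustration of coherent mixing: two channels arrive together,
# the local oscillator is tuned to channel 1, and after mixing and averaging
# (a crude low-pass filter) only channel 1's symbol survives.
fs = 1e6                          # sample rate of the toy model (Hz)
t = np.arange(4000) / fs
f1, f2 = 100e3, 150e3             # wanted channel and neighbouring channel

symbol_1 = 1 + 1j                 # information carried by the wanted channel
symbol_2 = -1 + 1j                # information on the neighbouring channel

received = (symbol_1 * np.exp(2j * np.pi * f1 * t)
            + symbol_2 * np.exp(2j * np.pi * f2 * t))

local_oscillator = np.exp(2j * np.pi * f1 * t)
mixed = received * np.conj(local_oscillator)

# Averaging acts as a low-pass filter: the f2 - f1 beat term integrates to ~0.
estimate = mixed.mean()
print("recovered symbol ~", np.round(estimate, 3))
```
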
Fig. 3: Coherent 100 G system

The bit rate of a channel can be described as the simple product of the baud rate (symbol rate), the bits per symbol and the number of carriers used. Recent commercial coherent systems at 40 Gbps and 100 Gbps have exploited all of these dimensions. The technology that made this 100 G transmission possible is dual-polarization QPSK modulation (DP-QPSK) with a coherent receiver. Modulation is required to ensure propagation, to support multiple access and to enhance the SNR, as well as to achieve bandwidth compression. The DP-QPSK modulation technique decreases the baud (symbol) rate of the system by carrying four bits per symbol, keeping the optical spectrum four times narrower than it would be at the unreduced baud rate. Because of its ability to pass through multiple Optical Add-Drop Multiplexers (OADMs) and its practical PMD (Polarization Mode Dispersion) tolerance, DP-QPSK is recognized as a viable format for deployment within 50 GHz-spaced systems.
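
The "simple product" relation mentioned above can be written out directly. The 28 Gbaud symbol rate and the roughly 12% FEC/framing overhead used below are assumed typical values rather than figures from this article; they are chosen only to show how a roughly 100 Gbps client signal comes out of a DP-QPSK line signal.

```python
# Line rate as the product of symbol rate, bits per symbol and carriers,
# illustrated for a single-carrier DP-QPSK signal. The 28 Gbaud symbol rate
# and ~12% overhead figure are assumed typical values, not from the article.
def line_rate_gbps(symbol_rate_gbaud, bits_per_symbol, polarizations=1, carriers=1):
    return symbol_rate_gbaud * bits_per_symbol * polarizations * carriers

qpsk_bits_per_symbol = 2                                            # QPSK carries 2 bits per symbol
raw = line_rate_gbps(28, qpsk_bits_per_symbol, polarizations=2)     # dual polarization
payload = raw / 1.12                                                # strip assumed FEC + framing overhead

print(f"raw line rate : {raw:.0f} Gbps")   # -> 112 Gbps
print(f"payload       : {payload:.0f} Gbps (approximately the 100 G client signal)")
```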

3. SEA-ME-WE-5

Fig. 4: Proposed Route Diagram of SMW-5 (Bangladesh will join through a branch cable with the main cable)

The existing SMW-4 cable is the only submarine cable keeping Bangladesh connected to the international information superhighway. If this cable were to suffer physical damage or disruption due to a calamity or any other reason, the country's international long-distance telecommunication would suffer badly. That is why Bangladesh has long been working to acquire a second submarine cable, so that the international links can be maintained without outage. As part of this effort, Bangladesh established contact with a new consortium, SEA-ME-WE-5, and has already signed a MoU (Memorandum of Understanding) with it. Initially, there will be 16 parties in the consortium. This submarine cable will extend from Japan to London, for a total of 25,000 km. The estimated cost of joining the project is 48 million USD for Bangladesh; however, the cost will be reduced to 38 million USD if Myanmar joins and shares the branch cable with Bangladesh. Bangladesh might get 17 lambdas of 100 Gbps each, altogether amounting to 1700 Gbps. Up to now, the landing station of the second submarine cable has been planned for Mongla in Bagerhat district, with its physical infrastructure expected to be built in 2012-2013. It is expected that by the end of 2014 the process of connecting Bangladesh to a second submarine cable will be completed.

Fig. 5: System Configuration Diagram (proposed) of SMW-5

4. Beyond 100 G Technology: Coherent Systems, Super channels or Optical OFDM might be the solutions
In recent years, due to new developments in polarization-multiplexed, phase-modulated DWDM transmission over long distances, optical coherent detection, sophisticated DSP (digital signal processing) and high-performance ICs (application-specific integrated circuits, or ASICs), transceiver equipment for optical transmission is emerging with a high level of capacity and sophistication. In particular, coherent detection has made it possible to choose among a wide range of modulation formats, such as the use of dual polarization or multiple sub-carriers. The use of digital signal processing techniques to level out various linear and nonlinear impairments has also become viable with coherent detection, and it has been found in practice that coherent systems provide robust tolerance against unwanted transient signals. Future high-speed, high-performance transmission systems are therefore expected to be based on coherent systems, which are likely to bring optical transmission to 200 G, 400 G and later 1 Terabit (1000 G) systems.
DWDM is an important technique that enables multiple optical carriers to travel in parallel through one fiber, making more efficient use of expensive fibers over thousands of kilometres. The present state of the art for DWDM in 2012 or 2013 may still be 100 Gbps. However, the growth of the internet has created a requirement for a new scale of bandwidth, preferably without adding any more complexity to operations. It is clear that for a high-capacity network beyond 100 G, in addition to a move toward larger, more powerful transport switches, the mechanisms of DWDM optical transmission may have to change.

Fig. 6: Bandwidth Virtualization with Super-channels

A new approach to DWDM capacity, the super-channel, could be an effective solution to the challenges posed by today's internet growth. In simple terms, the super-channel is an evolution of DWDM in which several optical carriers are combined to create a composite line-side signal of the desired capacity, provisioned in one operational cycle. It could be more practical to combine multiple carriers into a super-channel to move beyond 100 Gbps than to simply increase the data rate of an individual carrier. As far as normal end users are concerned, however, super-channels are indistinguishable from a single-carrier channel of the same data rate. Similar to multi-core CPU processing, the concept of super-channels resembles bandwidth virtualization through multi-carrier techniques. DWDM super-channels have the potential to offer an ideal solution to the problem of increasing optical transport capacity beyond 100 Gbps, up to 1 Tbps. They will also reduce the complexity of the electronic circuitry by using large-scale PICs (photonic integrated circuits).
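
In the same spirit, a super-channel's headline capacity is simply the sum of its carriers, provisioned and presented as one unit. The ten 100 Gbps carriers in the sketch below are an assumed example configuration, used only to show how a 1 Tbps composite signal could be provisioned in one cycle.

```python
# A super-channel presented as one provisionable unit built from several
# optical carriers. Ten 100 Gbps carriers is an assumed example configuration.
class SuperChannel:
    def __init__(self, carrier_rate_gbps, n_carriers):
        self.carriers = [carrier_rate_gbps] * n_carriers

    @property
    def capacity_gbps(self):
        # To the end user this looks like a single channel of the total rate.
        return sum(self.carriers)


sc = SuperChannel(carrier_rate_gbps=100, n_carriers=10)
print(f"{len(sc.carriers)} carriers provisioned in one cycle "
      f"-> {sc.capacity_gbps / 1000:.0f} Tbps super-channel")
```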

Monday, June 4, 2012

Cyber War - A Threat to Business




Md. Shainur Rahman, AGM, DTR (west), SBN, BTCL


Man's constant engagement with information technology has made the world a very busy place: with IT, physical distance has been reduced almost to zero, and the day is not far off when people will control much of the world from their own rooms. IT has been put to great use in utility services, but some alarming incidents are also taking place nowadays. Recently the conflict called cyber war has spread all over the world so widely that it can rightly be called a "silent psychological war". It is true that cyber war sheds no blood, but it still has a damaging effect on people: communication systems are suspended and online transactions are disrupted by it. The idea of cyber war has recently been discussed widely in the Bangladeshi media, and the hacking of websites by Bangladeshi and Indian hackers has spread fear far and wide. Cyber hacking and attacks are already well known in technologically advanced countries. Recent media reports suggest that China has made major strides in cyber war and put American forces at risk; tension over Taiwan or the South China Sea may bring extra pressure on American forces because of China's cyber power. The hackers have not yet attacked anyone heavily, but they are capable of serious attacks if they wish. Moreover, criminal hackers have already acquired great capability in hacking computers and websites around the world. We therefore have to be very cautious about the terrible nature of hacking and cyber war, and how to overcome a deadly cyber attack should be our main concern today.




What is cyber war?


Cyber war is Internet-based conflict involving politically motivated attacks on information and information systems. Cyber war attacks can disable official websites and networks, disrupt or disable essential services, steal or alter classified data, and cripple financial systems, among many other possibilities. The initiator of a cyber war can be an individual, an organization or a government. There are many different kinds of cyber war, from specialized hacking of a specific server to generally targeted denial-of-service attacks. The ultimate goal in cyber war is an attack that completely removes the ability of all members of an organization or government to connect to the Internet; in the modern information-centric society, this can lead to the loss of millions or billions of dollars of productivity, or worse.


Types of attack:


There are many different kinds of cyber war attacks:
- Vandalism
- Propaganda
- Denial of Service
- Network Attacks Against Infrastructure
- Non-Network Attacks Against Infrastructure


Vandalism:

Web vandalism is characterized by website defacement and/or denial-of-service attacks. Website defacement is a major threat to many internet-enabled businesses: it negatively affects the public image of a company, which may then suffer a loss of customers.


Propaganda:


Propaganda is a deliberate collection of messages intended to influence the opinions and actions of large numbers of people. The information in these messages is not presented impartially or necessarily truthfully, since the basic purpose of propaganda is to sway the audience towards the propagandist's side. Propaganda is a powerful recruiting tool, and the web provides a way for it to be disseminated quickly and cheaply: the cost of publishing propaganda may be no more than a web hosting fee. Through the web's video and file-sharing sites, along with social networking sites, propaganda can reach large audiences in a very short amount of time.


Denial of Service:


A denial-of-service (DoS) attack is an attempt to consume all of an available resource in order to keep that resource from its intended users. It is one of the most common attacks on the Internet, so widespread because it is relatively easy to mount and very difficult to defend against. Typically, an attacker creates a flood of bogus requests to a service and ignores the results. The server is bogged down by the large number of incoming requests, taking a long time to handle both the fraudulent requests and any legitimate requests that arrive during the attack. In extreme cases the server cannot handle the strain of the incoming connections and crashes, staying down until it is manually restarted. A denial-of-service attack may also consist of a single request crafted to exploit a specific vulnerability in the server, causing it to crash without requiring a large number of requests.
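
One common building block for blunting such request floods is per-client rate limiting. The token-bucket sketch below is a generic illustration rather than a description of any particular product; the rate and burst limits, and the example address, are arbitrary.

```python
import time
from collections import defaultdict

# A generic token-bucket rate limiter: each client may burst up to `capacity`
# requests and is then limited to `rate` requests per second. Limits are arbitrary.
class TokenBucket:
    def __init__(self, rate=5.0, capacity=10.0):
        self.rate, self.capacity = rate, capacity
        self.tokens = defaultdict(lambda: capacity)
        self.last = defaultdict(time.monotonic)

    def allow(self, client_ip):
        now = time.monotonic()
        elapsed = now - self.last[client_ip]
        self.last[client_ip] = now
        # Refill tokens according to the time elapsed, up to the bucket capacity.
        self.tokens[client_ip] = min(self.capacity,
                                     self.tokens[client_ip] + elapsed * self.rate)
        if self.tokens[client_ip] >= 1:
            self.tokens[client_ip] -= 1
            return True
        return False                      # drop or delay the request


limiter = TokenBucket()
# Simulate a burst of 15 rapid requests from one address: roughly the first
# 10 pass (the burst allowance) and the rest are rejected.
results = [limiter.allow("203.0.113.7") for _ in range(15)]
print(results.count(True), "allowed,", results.count(False), "rejected")
```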


Network Based attacks Against Infrastructure:


As in conventional war, critical infrastructure (power, water, fuel, communications and transportation) is a target of cyber attacks. Although attacks on critical infrastructure are often regarded as the most severe type of cyber attack, few have been perpetrated to this day. Previously it was thought that the worst a network-based attack could do was denial of service; as recently as this year, however, hackers were able to inflict physical damage on machinery. Electrical power, water and fuel supplies are at the core of a country's infrastructure, and the disruption of any of these services would have a chain-reaction effect and cause severe repercussions. For efficiency and cost-saving purposes, the control systems of power plants, water pumping stations and fuel lines have been networked and can be controlled remotely. This opens the possibility of an attacker gaining access and taking control.


Non-Network Based Attacks Against Infrastructure:


Equipment disruption can also occur from non-computerized attacks. An Electromagnetic Pulse (EMP) occurs after a nuclear device is detonated and disables all electronic devices within range. However EMPs can also be generated without a nuclear explosion. Non-nuclear EMPs can be loaded in cruise missiles or as the payload of bombs and cause widespread equipment failure, as shown in the figure below.


[Figure: non-nuclear EMP delivery causing widespread equipment failure]

Defense Mechanism


The threat of cyber war is different from common Internet threats, and most organizations are not adequately prepared for it. Corporate defenses typically concentrate on protecting data from theft or alteration, whereas cyber war also seeks to disrupt critical infrastructure and services. New technologies such as cloud computing and social networking, along with the proliferation of mobile devices, have also resulted in an increase in cyber attacks. These factors are expected to drive demand for cyber security programs. Since cyber war has a great negative impact on our lives, we should try to protect our systems from it. The defense mechanisms against cyber war are given below.


Cyber war defenses: 


Internet-based attacks are becoming more sophisticated all the time. Cyber war threats warrant composite security defenses composed of preventive, detective and corrective controls. A successful defense strategy focuses on identifying critical information and services and implementing layered controls to protect them.
Sound business practices are founded on the principle of action, not reaction. That means security programs must be highly proactive in safeguarding sensitive data and critical services: fixing vulnerabilities hidden from auditors; raising awareness of issues that exist because of politics or organizational gaps, and working collaboratively to address them; and preventing compensating controls from being cited inappropriately. The layered controls specified by best practices and applicable regulations are necessary to maintain a strong security posture.


Network breach prevention: 
Defining a network security perimeter can be difficult in a large enterprise, but a number of best practices can help. Start by documenting the networks and systems at each site. Next, contact the Internet service providers (ISPs) and determine the available IP address ranges. After obtaining proper permission, scan each IP range during a maintenance window and carefully examine the results for vulnerabilities and rogue systems. Finally, monitor each IP range and configure alerts for when an unused IP address comes into use. It is also necessary to ensure that all external network access points are controlled through the use of firewalls and encrypted virtual private networks (VPNs).
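
As a very small illustration of the "scan each IP range during a maintenance window" step, the sketch below checks a documented test range for hosts answering on one port. The 192.0.2.0/29 range and port 22 are placeholder values; such a scan should only ever be run against networks you own and are authorised to test.

```python
import socket
from ipaddress import ip_network

# Minimal sketch of checking an address range for hosts answering on a port,
# as part of an authorised scan during a maintenance window.
def responds(addr, port, timeout=0.5):
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(cidr="192.0.2.0/29", port=22):
    # Return the addresses in the range that accepted a TCP connection.
    return [str(host) for host in ip_network(cidr).hosts() if responds(str(host), port)]

if __name__ == "__main__":
    print("hosts answering:", scan())
```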


Monitoring and hardening: Cyber warriors may be very stealthy and conduct custom attacks over weeks or months. Intrusion Detection System (IDS) software has to be tuned appropriately to help prevent cyber attacks, and content-filtering solutions have to be implemented to detect unauthorized use of sensitive information and prevent it from leaving the network.


Availability: Availability isn't just a matter of business continuity or disaster recovery; systems must also be available when under attack. One should prepare for network DoS attacks by implementing intrusion prevention systems (IPS) to counter attacks in real time and by configuring operating systems to discard DoS traffic.


Government-strength controls: Cyber war threats require government-strength controls to protect confidential information such as trade secrets. Implementing an air gap, or physical separation, should be considered to protect sensitive networks; it is an absolute way to prevent data leaks across networks. Most information security professionals agree that a determined attacker will penetrate perimeter defenses, and the principle of defense-in-depth is founded on that assumption.


Knowing and exploiting the enemy: To be successful in fending off cyber attacks, it is necessary to understand how the opposition thinks and to anticipate their next move. Cyber warriors are professionals and use traditional warfare strategy and tactics.
The table below summarizes attack targets, goals, examples and defenses in brief:

| Attack target | Goal of attack | Attack examples | Defenses |
|---|---|---|---|
| End-system | Data access and modification | Hacking, phishing, espionage, etc. | Virus scanner, firewall, network intrusion detection system, etc. |
| End-system | Denial of service | Denial-of-service attacks via botnets, etc. | |
| Control plane | Data access and modification | Malicious route announcements, DNS cache poisoning, etc. | Secure routing protocols (with cryptographic authentication), secure DNS (DNSSEC), etc. |
| Control plane | Denial of service | DNS recursion attacks, etc. | |
| Data plane | Data access and modification | Eavesdropping, man-in-the-middle attacks, etc. | Secure network protocols (IPsec, TLS), etc. |



Every new technology has its merits and demerits, and information technology is no exception. Despite its numerous merits, advantages and capabilities, it is not free from a negative side: the cyber attacks, or cyber war, carried out by immoral hackers all over the world. It is therefore imperative to study cyber attacks and to spread moral teaching collectively through the world body, while implementing anti-hacking laws.





Md. Shahinur Rahaman
Assistant General Manager
DTR(West), Sher-e-Bangla Nagar, Dhaka.