Monday, 25 May 2020

SAN Zoning


SAN zoning is a method of arranging Fibre Channel devices into logical groups over the physical configuration of the fabric.
SAN zoning may be utilized to implement compartmentalization of data for security purposes.
Each device in a SAN may be placed into multiple zones.

Hard and Soft Zoning

Hard zoning is zoning which is implemented in hardware. Soft zoning is zoning which is implemented in software.
Hard zoning physically blocks access to a zone from any device outside of the zone.
Soft zoning uses filtering implemented in Fibre Channel switches to prevent ports from being seen from outside of their assigned zones. The security weakness of soft zoning is that the ports remain accessible if a device in another zone correctly guesses the Fibre Channel address.

WWN Zoning

WWN zoning uses name servers in the switches to either allow or block access to particular World Wide Names (WWNs) in the fabric.
A major advantage of WWN zoning is the ability to recable the fabric without having to redo the zone information.
WWN zoning is susceptible to unauthorized access, as the zone can be bypassed if an attacker is able to spoof the World Wide Name of an authorized HBA.

Port Zoning

Port zoning utilizes physical ports to define security zones. A user's access to data is determined by the physical port to which he or she is connected.
With port zoning, zone information must be updated every time a user changes switch ports. In addition, port zoning does not allow zones to overlap.
Port zoning is normally implemented using hard zoning, but could also be implemented using soft zoning.
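To make the zoning ideas above concrete, here is a minimal sketch in Python (using hypothetical zone names and WWNs, not taken from any particular fabric) that models a zone set as groups of member WWNs and allows two devices to communicate only when they share at least one zone:

# Hypothetical zone set: each zone is a set of member WWNs (WWN zoning).
zones = {
    "zone_db":     {"10:00:00:00:c9:11:11:11", "50:06:01:60:aa:bb:cc:01"},
    "zone_backup": {"10:00:00:00:c9:22:22:22", "50:06:01:60:aa:bb:cc:01"},
}

def can_communicate(wwn_a, wwn_b):
    # Two devices may talk only if at least one zone contains both of them.
    # Note that a device may belong to several zones: the storage port
    # 50:06:01:60:aa:bb:cc:01 above is a member of both zones.
    return any({wwn_a, wwn_b} <= members for members in zones.values())

print(can_communicate("10:00:00:00:c9:11:11:11", "50:06:01:60:aa:bb:cc:01"))  # True
print(can_communicate("10:00:00:00:c9:11:11:11", "10:00:00:00:c9:22:22:22"))  # False

The same structure covers overlapping zones: because the storage port appears in both zones, each host can reach it, while the two hosts cannot reach each other.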

Wednesday, 20 May 2020

SAN (Storage Area Network)


A SAN (Storage Area Network) is a network specifically dedicated to the task of transporting data for storage and retrieval. SAN architectures are alternatives to storing data on disks directly attached to servers or on Network Attached Storage (NAS) devices that are connected through general purpose networks.
To meet growing storage demands, enterprises deploy SANs to increase system efficiency and expand capacity. According to SNIA (Storage Networking Industry Association):
  1. A SAN's purpose is to transmit data between storage systems, or between storage systems and client servers. The SAN fabric comprises the physical connections among storage systems, storage management devices, servers, and network devices. A SAN is usually defined as a provider of block I/O services.
  2. The storage system contains storage components, devices, computer equipment, software applications, and network devices.
A SAN can attach to various storage devices such as disk-array subsystems, CD towers, and magnetic tape drives and libraries, and it provides data I/O services via hubs or switches through network connections.


SAN (Storage Area Network) Protocols

Storage Area Networks are traditionally connected over Fibre Channel networks. Storage Area Networks have also been built using SCSI (Small Computer System Interface) technology. An Ethernet network that was dedicated solely to storage purposes would also qualify as a SAN.
Internet Small Computer Systems Interface (iSCSI) is a SCSI variant that encapsulates SCSI data in TCP packets and transmits them over IP networks.
Fibre Channel over TCP/IP (FCIP) tunnels Fibre Channel over IP-based networks.
The Internet Fibre Channel Protocol (iFCP) transports Fibre Channel Layer 4 FCP on IP networks.
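As a small illustration of the iSCSI case, the sketch below (Python, with a hypothetical target address) simply checks that a target's well-known iSCSI TCP port, 3260, is reachable over an IP network; an actual session would continue with an iSCSI login handled by the operating system's initiator:

import socket

def iscsi_target_reachable(host, port=3260, timeout=3.0):
    # 3260 is the well-known TCP port for iSCSI targets. This only checks
    # that the port is reachable over the IP network; it does not perform
    # an iSCSI login or read any blocks.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(iscsi_target_reachable("192.0.2.50"))  # hypothetical target address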

Advantages of SAN

By integrating storage devices, a SAN increases storage space usability and cost efficiency.
  • A SAN is a high-speed storage sharing system.
  • A SAN increases the network bandwidth and the reliability of data I/O.
  • A SAN is separated from the regular network and has the ability to expand its storage capacity.
  • A SAN reduces storage management cost because it simplifies the fabric and device management.

Thursday, 14 May 2020

How Does Cloud Computing Work?


Cloud computing refers to the delivery of one or more services powered by computer hardware and software over the Internet or a local computer network. For most major service providers, a large number of computers are configured in a grid to provide the service architecture located in the “cloud.” The name comes from the cloud symbol traditionally used in systems diagrams to represent the Internet and other complex underlying infrastructure. As services in this domain become more prevalent, a common question among computer users is: how does cloud computing work?

What is Cloud Computing?

Cloud computing refers to the concept of sharing software, resources, and information via a network connection such as the Internet. In a cloud structure, the cloud servers store end-users' information and data and can serve the service application(s) as well, reducing the need for storage space on client computers. End-users are free to access their information wherever they can obtain an Internet connection and do not have to worry about upgrading service applications to the latest release. This allows users to edit files such as PowerPoint presentations or documents on mobile devices or tablets without having specific software installed on the computing device being used while away from home or the office. Cloud computing services can be targeted to consumer, small business, or enterprise use depending on the nature of the service(s) being provided.

What are the Cloud Computing Services Available to Consumers?

There are a number of services available via cloud computing technology in the marketplace today. Cloud services designed to provide consumer services will normally provide access through a compatible web browser or light-weight desktop or mobile application or app. The information saved by the user is saved on the remote servers in the cloud. All cloud computer services rely on sharing resources to help achieve economies of scale similar to an electric grid servicing a community. Common cloud services provided by industry today are:

API as a service (APIaaS)
Back-end as a service (BaaS)
Data as a service (DaaS)
Database as a service (DBaaS)
Platform as a service (PaaS)
Infrastructure as a service (IaaS)
Storage as a service (STaaS)
Software as a service (SaaS)
Network as a service (NaaS)
Security as a service (SECaaS)
Test environment as a service (TEaaS)
Desktop virtualization

What Are the Advantages of Cloud Computing?


Even though cloud computing is relatively new, it has a number of advantages when compared to traditional software acquisition and deployment ranging from home use to the enterprise.

Accessible Anywhere.
When using a cloud computing-based service, information can be accessed anywhere in the world that the end-user can obtain an Internet connection. For those who travel for work, this allows employees to get work done away from the office, whether at home or on a business trip, without having to install software on personal computers. It also reduces the need to transport documents home on a disk or memory stick for use away from the office. Most cloud word processing services such as Google Docs (re-branded as Google Drive) and Microsoft Office 365 allow co-workers to edit documents at the same time to further increase efficiency.

Increased Storage.
Prior to the commercialization of cloud computing, consumers were limited by the size of the hard drive in the computer being used to do work. If storage ran low, you would have to either upgrade the computer's hard drive or purchase an external hard drive or USB stick to save information. Most cloud computing services offer more storage than is commonly found on client computers, allowing end-users to save money by not worrying about storage space on the local computer.

Quick Set-Up.
Most cloud computing services can be setup faster than it takes to install a new software application on a client computer. The registration process typically consists of setting up payment, account settings such as a login and password, and confirming one’s registration in the email account used to sign-up for the service. Once finished, the cloud computing service can be used without additional set-up.

Software is Automatically Updated.
One of the largest recurring costs for organizations is keeping up with software updates and their subsequent deployment throughout the enterprise. By making the shift to cloud computing, updates become the responsibility of the service provider. In most cases, updates do not result in a loss of service availability for the end-user, saving both time and money. In the event of service upgrades that require changes in format or other non-routine changes on the consumer's end, the provider will normally give ample warning to allow the client to make the required changes before the shift to the newer software.

Cost is Reduced.
Cloud computing services typically cost much less money up-front than purchasing traditional software seat licenses. The pricing model varies by service provider, but charges are typically levied on a monthly or yearly subscription basis. When selecting a plan with no contract, services can be terminated at any time, allowing companies to pay for cloud computing apps only when needed.

What are the Disadvantages of Cloud Computing?


Increased Security Risks.
For the security-paranoid, trusting one’s data to a service provider just isn’t an option. When making the shift to cloud computing, there is a certain level of trust required on the part of the consumer or organization making use of the service. Organizations that have adopted cloud computing services prioritize the savings in cost over the potential of a security breach of the company or organization’s information. Some cloud services do come with SSL, email encryption, and spam filtering in addition to application-layer security options such as data encryption and password protection. Despite these options, it comes down to an organization’s priorities on whether or not the potential security risks posed by cloud computing make it worthwhile to make the switch from client-side application deployment.

Control over Software Updates.
Although organizations do not have to invest time and money in making software upgrades when leveraging cloud services, they do lose control over the timing of upgrades. In some cases, this can result in a loss of productivity on the client-side in addition to unplanned costs to upgrade data to meet requirements of the new software. To mitigate this concern, proper research into software upgrade policies and notification requirements on the part of the service provider are a must before signing up for any cloud computing service.

Cloud Service Provider Reliability.
Even Amazon isn’t immune to the occasional outage. When leveraging cloud services, organizations are relying both on the service provider and on the organization's computers being able to access the Internet. Depending on the service being used, it may or may not provide a light-weight, offline option for work to be done by clients until service is restored. The end result can be an unplanned loss of productivity if the cloud service remains unavailable for an extended time frame.

What are the Cloud Computing Service Models?

The three primary cloud computing service models deployed today are: platform as a service (PaaS), infrastructure as a service (IaaS), and software as a service (SaaS). More recently, communication as a service (CaaS) and network as a service (NaaS) were recognized by the International Telecommunication Union as part of the core cloud computing models.

Infrastructure as a service (IaaS)
IaaS is considered to be the most fundamental cloud computing service model. Under IaaS, virtual machines (i.e., virtualized computers) are run in the cloud with access provided to the subscribing organization. There is typically a pool of hypervisors running the virtual machines, together with supporting resources such as firewalls, virtual LANs (VLANs), and software bundles. Depending on the provider, end-users can even install additional software on the virtual machine(s) to suit their needs. Under this model, the monthly or yearly cost of the service depends on the software package deployed on the virtual machines, the bandwidth, and the storage space provided on the server. Examples of IaaS providers include Google Compute Engine, Amazon CloudFormation, HP Cloud, and Windows Azure Virtual Machines.

Platform as a Service (PaaS)
The platform as a service (PaaS) model has the service provider deliver a computing platform that includes a specific operating system (OS), web server, database, and programming language execution environment. PaaS builds on IaaS by allowing application developers to create, run, and test software on a cloud platform without having to purchase the software and hardware required to build a development environment at the office. The more advanced PaaS offerings will even scale resources to meet application demand in order to minimize costs to the development team. Examples of major PaaS providers in the market today include Google App Engine, Windows Azure Compute, and Amazon Elastic Beanstalk.

Software as a Service (SaaS)
The software as a service (SaaS) model is becoming the most commonly encountered cloud computing service for the average computer user. Under this model, the service provides application software located in the cloud and end-users access the software via standard web browsers or light-weight client-side applications. In most cases, there is no need for the installation of any software on the end-user’s computer and the software can typically be accessed from any operating system (OS). A cloud software application will distribute work over a large number of virtual machines to allow it to scale across a large number of end-users and organizations. Most SaaS services are either free (for individual use) or charge a monthly or annual fee for use per user. This allows organizations to scale deployment of software to only those who have a need to use it. Some of the commonly encountered SaaS providers are: Microsoft Office 365 and Google Drive (formerly Google Docs).

Network as a Service (NaaS)
Network as a service is the category of services in the cloud where the organization of networking related resources is unified (or located) in the cloud. These typically include extended virtual private networks (VPNs), scalable bandwidth, and other networking related services. Under this model, the management of the virtual network service is left to the service provider allowing the subscribing person or organization to save time and potentially money over managing the resources in-house.

Communication as a Service (CaaS)
Communication as a service (CaaS) allows organizations to push tasks such as call center automation, desktop call management, faxing, messaging, multimedia routing, and screen-pop integration to the cloud. Organizations can select various deployment models for those in the company or enterprise to manage communications infrastructure throughout the organization to cut costs where appropriate. Some CaaS providers now provide a “Pay-As-You-Go” pricing model which is attractive to smaller businesses who do not have a persistent need for all services required by larger organizations.

Why is Cloud Computing Important?

Cloud computing allows both individual consumers and organizations of all sizes a fundamentally different model of information technology operation than has been available in the modern computing age. Taking advantage of the maturity of web applications, high-speed Internet availability, and computer hardware advances, the cost of doing business using cloud services can be significantly less than using traditional software and service deployment models. By relying on cloud providers who specialize in their particular service, the need for in-house IT departments and operating budgets can be reduced allowing for potentially greater productivity and growth for organizations who make the shift to the cloud. As cloud technology continues to mature, the cost of purchasing client-side computing devices will likely continue to decrease with the cost of client-side software reducing with the overall movement to relying on cloud computing resources.

Monday, 11 May 2020

HBA (Host Bus Adapter)


An HBA (host bus adapter) is an integrated circuit adapter or circuit board that is designed to provide physical connectivity between a computer host and storage devices or a network. In addition to the physical connection, an HBA also provides input/output (I/O) processing and relieves the host computer's microprocessor of data retrieval and storage tasks. As a result, the device improves computer processor performance. An HBA and the various subcomponents associated with the device(s) are sometimes referred to as disk channels.


What do Host Bus Adapters Do?
The primary purpose of a host bus adapter (HBA) is to connect a computer host to storage and network devices. Other terms used to refer to an HBA are host controller or host adapter, with the name typically tied to the devices being connected as well as the I/O functionality provided by the adapter. Some of the most common devices these adapters connect are eSATA, Fibre Channel, and SCSI devices. Additionally, the hardware components that connect IDE, FireWire, USB, Ethernet, and other sub-systems are commonly labeled as HBAs or host adapters. By taking care of the communications to and from these devices, the adapter is able to free resources on the computer's microprocessor and improve overall computing performance. When an adapter is used to connect a computer to a network, the labels converged network adapter or NIC (network interface controller) are used, depending on the type of physical connection between the device(s) and the computer.


What is a SCSI Host Bus Adapter?
SCSI host adapters are used to connect the computer to SCSI devices and allow the computing device to boot from the SCSI hardware. The adapter also provides the ability for the end user to configure the device through a device driver that is linked to the operating system of the computer. In parallel SCSI subsystems, every device attached to the adapter is assigned a unique ID. The host adapter itself is normally assigned SCSI ID 7, which ranks the adapter as having the highest priority on the SCSI bus. As the SCSI ID numerically descends, the priority of the component also decreases. 16-bit (wide) buses are slightly different in that ID 8 has the lowest priority rather than ID 0, which ensures backwards compatibility with a narrow, 8-bit SCSI bus. Modern computers can have more than one host bus adapter installed, which significantly increases the total number of SCSI devices that can be made available to the end user. In all cases, the SCSI host adapter will normally assume the role of SCSI initiator and issue commands to the other devices connected to the card.
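The arbitration-priority rules described above can be captured in a short, illustrative Python sketch:

def scsi_priority_order(wide=False):
    # Arbitration priority on a parallel SCSI bus: ID 7 is highest and the
    # priority falls as the ID descends. On a wide (16-bit) bus, IDs 8-15
    # rank below IDs 7-0 so that narrow (8-bit) devices keep their relative
    # priority, leaving ID 8 with the lowest priority of all.
    order = list(range(7, -1, -1))
    if wide:
        order += list(range(15, 7, -1))
    return order

print(scsi_priority_order())           # [7, 6, 5, 4, 3, 2, 1, 0]
print(scsi_priority_order(wide=True))  # [7, 6, ..., 0, 15, 14, ..., 8]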


Who are the Major SCSI Adapter Manufacturers?
Some of the primary SCSI host bus adapter manufacturers on the market today are HP, ATTO Technology, Promise Technology, Adaptec, and LSI Corporation. LSI, Adaptec, and ATTO offer specialized PCIe SCSI host adapters that can be installed in Intel PCs, Apple Mac computers, and low-profile motherboards that lack built-in SCSI support because they instead ship with SATA or SAS connectivity.


What Does a Host Controller Interface Do?
An HCI (host controller interface) is designed to function as an interface at the register level of a computer. It allows a host controller for either FireWire or USB hardware to establish communications with the host controller driver installed in software on the computer. On operating systems sold today, the driver software will typically be embedded within the OS; however, it may also be installed through the traditional application installation process when adding a new interface or microcontroller.


Host Bus Adapters and FireWire
An Open Host Controller Interface (OHCI) is commonly used to support a FireWire card on a computer. When the FireWire card supports OHCI, that means that the card is designed to support standard interfaces with the computer and the card will be able to be accessed by the installed OHCI FireWire drivers found on all major commercial operating systems sold today. This lets the computer acknowledge and access the FireWire cable and connected device without having to install software drivers that are specific to the FireWire card.


OHCI Standard for USB
OHCI works similarly for USB cards as it does for FireWire. Unfortunately, the standard only supports USB 1.1 at the time of this writing (both the full and low speeds of the standard). As a result, the register interface for the standard is significantly different. There is additional programming logic located in the controller for USB, which makes access to the USB card more efficient. If a computer does not have an Intel or VIA chipset installed but includes USB 1.1 support, the device will likely employ OHCI for accessing the USB card and its information.


What is the Universal Host Controller Interface?
UHCI (Universal Host Controller Interface) was created by Intel Corp. for USB 1.0, supporting both low and full speeds over the connection. Since the majority of computers sold on the market today include either VIA or Intel chipsets, UHCI is found on more computers than OHCI for USB. As a result, a number of USB vendors are shifting to providing only UHCI functionality as a cost-saving measure.


What is EHCI (Enhanced Host Controller Interface)?
EHCI (Enhanced Host Controller Interface) is the higher-speed controller standard that was designed for USB 2.0. Based on the lessons learned from having two competing standards, one of which included proprietary information, industry insisted on a single, open standard for USB 2.0 adapters. This allows manufacturers to reduce the overall complexity and cost of the hardware, letting companies decrease prices, increase profits, or both. To help prevent proprietary information or features from being included in the standard, Intel Corporation hosted the EHCI conformance testing efforts.

Prior to the development of EHCI, a Windows computer would have two controllers for its high-speed ports. One controller would handle high-speed devices while the other handled low- and full-speed ones. The OHCI driver would cover low- and full-speed functionality for USB ports on PCI expansion cards that had NEC controller chipsets (though not in all cases). The UHCI driver would then provide low- and full-speed functionality for an Intel controller chipset located on the computer's motherboard. With EHCI, the need for multiple interfaces is eliminated. More recently, all of the ports on a computer are routed through a rate matching hub (RMH) and EHCI is used to indirectly provide low- and full-speed functionality.


What is XHCI (Extensible Host Controller Interface)?
The most recent host controller standard to be published is XHCI (Extensible Host Controller Interface). The latest standard provides a significant speed improvement, increased power efficiency, and better virtualization when compared to previous interfaces. XHCI is designed to replace EHCI, OHCI, and UHCI while supporting all defined USB device speeds (USB 1.1 low and full speed; USB 2.0 low, full, and high speed; and USB 3.0).


Fibre Channel Interface Cards
A Fibre Channel interface card is also referred to as a host bus adapter. These devices are used in a variety of systems, including different computer architectures, buses (both PCI and the now obsolete SBus), and open systems. For this type of HBA, a unique WWN (World Wide Name) is assigned to the card, which is similar to an Ethernet MAC address in that it makes use of an OUI assigned by the IEEE. The WWN assigned to a Fibre Channel interface card, however, is longer than a MAC address, and each HBA includes two of these identifiers. The first is the node WWN (WWNN), which is shared by all of the ports located on the adapter. The second is the port WWN (WWPN), which is unique to each port on the device. At the time of this writing the speeds available on HBAs included 1, 2, 4, 8, 10, 16, and 20 Gbit/second.
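As an illustration of the WWN structure, the following Python sketch extracts the IEEE OUI from a WWN in the common IEEE Extended (NAA 1/2) format; the sample WWN is hypothetical, and other NAA formats place the OUI differently:

def wwn_oui(wwn):
    # Works for the common "IEEE Extended" WWN format (NAA 1 or 2), e.g.
    # 10:00:00:00:C9:12:34:56, where octets 3-5 carry the 24-bit IEEE OUI.
    # Other NAA formats place the OUI differently and are not handled here.
    octets = wwn.lower().split(":")
    if len(octets) != 8:
        raise ValueError("expected an 8-octet WWN")
    if int(octets[0][0], 16) not in (1, 2):
        raise ValueError("this sketch only handles NAA 1/2 WWNs")
    return ":".join(octets[2:5])

print(wwn_oui("10:00:00:00:c9:12:34:56"))  # -> 00:00:c9 (an Emulex OUI)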

The predominant Fibre Channel HBA manufacturers at the time of this writing are Emulex and QLogic. Although these vendors hold a significant market share, other companies that produce this type of HBA include ATTO, Agilent, LSI Corporation, and Brocade.


How Does a Fibre Channel Interface Card Work?
Fibre Channel (FC) is a high-speed networking technology that is used to connect computers to data storage equipment. Transfer rates of 2, 4, 8, and 16 gigabits per second are typical. The T11 Technical Committee of the International Committee for Information Technology Standards (INCITS) is responsible, under ANSI (the American National Standards Institute), for publishing the FC standard and developing succeeding versions.
Originally, Fibre Channel was only used in supercomputers that needed access to extremely large data stores at high speeds. Today, however, Fibre Channel has become a common connection type for enterprise storage environments running storage area networks (SANs). Although the name suggests fiber-optic cabling, the technology can now run over electrical (copper) interfaces in addition to fiber cables.


History of Fibre Channel Host Bus Adapters
Development of the Fibre Channel standard started in 1988; it received approval from ANSI in 1994 and became an international standard. It was created in order to simplify the HIPPI system then in use for providing higher-speed access to information. The HIPPI standard made use of a 50-pair cable, which had large connectors and imposed limits on overall cable length. Once the Fibre Channel standard started to make inroads into the mass-storage market, its primary competitor was the proprietary IBM SSA (Serial Storage Architecture) interface. The market ultimately went with Fibre Channel, which focused mostly on increasing the supported cabling distances and on decreasing the complexity of connections.

As the Fibre Channel standard and manufacturers' implementations matured, additional goals were added, including higher transfer speeds, increased capacity for the total number of connected devices, and the ability to connect to SCSI disk storage. Support for a variety of protocols was also added to the standard, including FICON, SCSI, IP, and ATM.

When Fibre Channel host bus adapters were first deployed, they only supported the use of fiber cabling. Although copper cable support was added early in the life of the FC standard, the development committee decided to keep the original name, adopting the British spelling "fibre" for the standard and reserving the American spelling "fiber" for the optical cabling itself.


What are the Fibre Channel Topologies?
There are three major Fibre Channel topologies defined in the standard. These topologies define how a number of ports are connected together. A port in the FC standard is considered to be any “thing” or “entity” which is capable of actively communicating over the network. This entity does not have to be a hardware port. It can be implemented in other devices such as an HBA located on a server, a FC switch, or disk storage.

Point-to-Point
The Point-to-point (FC-P2P) topology is defined as two devices that are directly connected to each other. This is the most basic FC topology and has limited connectivity when compared to the other two topologies included in the standard.

Arbitrated Loop
The arbitrated loop (FC-AL) topology has all of the connected devices arranged in a ring or loop. The concept is very similar to a token ring network. When a device is removed from the loop, all communication activity on the loop is interrupted until the loop is restored. A significant disadvantage of this topology is that if one of the devices fails, a break occurs in the ring. Fibre Channel hubs are therefore typically deployed to connect multiple devices and allow failed ports to be bypassed. Only one pair of ports is able to communicate on a loop at any one time.

Switched Fabric
The switched fabric (FC-SW) topology has all of the connected devices (or loops of devices) connected to an FC switch, a concept similar to Ethernet networking implementations. Advantages of the switched fabric topology include:
  • The switches manage the overall state of the fabric and can optimize the interconnections of the network.
  • Communications between two ports travel only through the switches and are not sent to other device ports.
  • Multiple pairs of ports are able to communicate at the same time.
  • The failure of a port is isolated and will not impact the ability of other ports to operate.

Tuesday, 5 May 2020

Understanding DNS Zones (DNS Zones Overview)


A DNS zone is the contiguous portion of the DNS domain name space over which a DNS server has authority. A zone is a portion of a namespace. It is not a domain. A domain is a branch of the DNS namespace. A DNS zone can contain one or more contiguous domains. A DNS server can be authoritative for multiple DNS zones. A non-contiguous namespace cannot be a DNS zone.
A zone contains the resource records for all of the names within the particular zone. Zone files are used if DNS data is not integrated with Active Directory. The zone files contain the DNS database resource records that define the zone. If DNS and Active Directory are integrated, then DNS data is stored in Active Directory.
The different types of zones used in Windows Server 2003 DNS are listed below:
  • Primary zone
  • Secondary zone
  • Active Directory-integrated zone
  • Reverse lookup zone
  • Stub zone
A primary zone is the only zone type that can be edited or updated because the data in the zone is the original source of the data for all domains in the zone. Updates made to the primary zone are made by the DNS server that is authoritative for the specific primary zone. Users can also back up data from a primary zone to a secondary zone.
A secondary zone is a read-only copy of the zone that was copied from the master server during zone transfer. In fact, a secondary zone can only be updated through zone transfer.
An Active Directory-integrated zone is a zone that stores its data in Active Directory. DNS zone files are not needed. This type of zone is an authoritative primary zone. An Active Directory-integrated zone’s zone data is
replicated during the Active Directory replication process. Active Directory-integrated zones also enjoy the Active Directory’s security features.
A reverse lookup zone is an authoritative DNS zone. These zones mainly resolve IP addresses to resource names on the network. A reverse lookup zone can be either of the following zones:
  • Primary zone
  • Secondary zone
  • Active Directory-integrated zone
A stub zone is a new Windows Server 2003 feature. Stub zones only contain those resource records necessary to identify the authoritative DNS servers for the master zone. Stub zones therefore contain only a copy of a zone, and are used to resolve recursive and iterative queries:
  • Iterative queries: The DNS server provides the best answer it can. This can be:
    • The resolved name
    • A referral to a different DNS server
  • Recursive queries: The DNS server has to reply with the requested information or with an error. The DNS server cannot provide a referral to a different DNS server.
Stub zones contain the following information:
  • Start of Authority (SOA) resource records of the zone
  • Resource records that list the authoritative DNS servers of the zone
  • Glue address (A) resource records that are necessary for contacting the authoritative servers of the zone.
Zone delegation occurs when users assign authority over portions of the DNS namespace to subdomains of the DNS namespace. Users should delegate a zone under the following circumstances:
  • To delegate administration of a DNS domain to a department or branch of the organization.
  • To improve performance and fault tolerance of the DNS environment. Users can distribute DNS database management and maintenance between several DNS servers.

Understanding DNS Zone Transfer

A zone transfer can be defined as the process that occurs to copy the zone’s resource records on the primary DNS server to secondary DNS servers. Zone transfer enables a secondary DNS server to continue handling queries if the primary DNS server fails. A secondary DNS server can also transfer its zone data to other secondary DNS servers that are beneath it in the DNS hierarchy. In this case, the secondary DNS server is regarded as the master DNS server to the other secondary servers.
The zone transfer methods are:
  • Full transfer: When the user configures a secondary DNS server for a zone and starts the secondary DNS server, the secondary DNS server requests a full copy of the zone from the primary DNS server. A full transfer of all the zone information is performed. Full zone transfers tend to be resource intensive. This disadvantage of full transfers has led to the development of incremental zone transfers.
  • Incremental zone transfer: With an incremental zone transfer, only those resource records that have changed in a zone since the last transfer are copied to the secondary DNS servers. During zone transfer, the DNS databases on the primary DNS server and the secondary DNS server are compared to determine whether there are any differences in the DNS data. If the primary and secondary DNS servers' data are the same, no zone transfer takes place. If the DNS data of the two servers differ, transfer of the changed (delta) resource records starts. This occurs when the serial number on the primary DNS server's database is higher than that of the secondary DNS server (see the sketch after this list). For incremental zone transfer to occur, the primary DNS server has to record incremental changes to its DNS database. Incremental zone transfers require less bandwidth than full zone transfers.
  • Active Directory transfers: These zone transfers occur when Active Directory-integrated zones are replicated to the domain controllers in a domain. Replication occurs through Active Directory replication.
  • DNS Notify is a mechanism that enables a primary DNS server to inform secondary DNS servers when its database has been updated. DNS Notify informs the secondary DNS servers when they need to initiate a zone transfer so that the updates of the primary DNS server can be replicated to them. When a secondary DNS server receives the notification from the primary DNS server, it can start an incremental zone transfer or a full zone transfer to pull zone changes from the primary DNS servers.
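The serial-number comparison that drives incremental transfer can be sketched with the third-party dnspython library (assumed available; the server addresses and zone name below are hypothetical):

import dns.resolver  # third-party dnspython package

def zone_serial(server_ip, zone):
    # Ask one specific server for the zone's SOA record and return its serial.
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server_ip]
    return resolver.resolve(zone, "SOA")[0].serial

primary_serial = zone_serial("192.0.2.1", "example.com.")    # primary DNS server
secondary_serial = zone_serial("192.0.2.2", "example.com.")  # secondary DNS server

if primary_serial > secondary_serial:
    print("zone transfer needed: primary serial is higher")
else:
    print("secondary is up to date")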

Understanding DNS Resource Records (RRs)

The DNS database contains resource records (entries) that resolve name resolution queries sent to the DNS server. Each DNS server contains the resource records (RRs) it needs to respond to name resolution queries for the portion of the DNS namespace for which it is authoritative. There are different types of resource records.
A few of the commonly used resource records (RRs) and their associated functions are described below (type, name, and function):
  • A (Host record): Contains the IP address of a specific host and maps the FQDN to this 32-bit IPv4 address.
  • AAAA (IPv6 address record): Ties a FQDN to a 128-bit IPv6 address.
  • AFSDB (Andrew File System database record): Associates a DNS domain name with a server subtype: an AFS version 3 volume or an authenticated name server using DCE/NCA.
  • ATMA (Asynchronous Transfer Mode address record): Associates a DNS domain name with the ATM address in the atm_address field.
  • CNAME (Canonical name / alias record): Ties an alias to its associated domain name.
  • HINFO (Host info record): Indicates the CPU and OS type for a particular host.
  • ISDN (ISDN info record): Ties a FQDN to an associated ISDN telephone number.
  • KEY (Public key record): Contains the public key for zones that can use DNS Security Extensions (DNSSEC).
  • MB (Mailbox name record): Maps the domain mail server name to the mail server's host name.
  • MG (Mail group record): Ties the domain mailing group to mailbox resource records.
  • MINFO (Mailbox info record): Associates a mailbox with the individual who maintains it.
  • MR (Mailbox renamed record): Maps an older mailbox name to its new mailbox name.
  • MX (Mail exchange record): Provides routing for messages to mail servers and backup servers.
  • NS (Name server record): Provides a list of the authoritative servers for a domain, as well as the authoritative DNS servers for delegated subdomains.
  • NXT (Next record): Indicates the resource record types that exist for a name and specifies the next resource record in the zone.
  • OPT (Option record): A pseudo-resource record that provides extended DNS functionality.
  • PTR (Pointer record): Points to a different resource record; used in reverse lookups to point to A-type resource records.
  • RT (Route through record): Provides routing information for hosts that do not have a WAN address.
  • SIG (Signature record): Stores the digital signature for an RR set.
  • SOA (Start of Authority record): Contains zone information for determining the name of the primary DNS server for the zone, and stores other zone property information such as version information.
  • SRV (Service locator record): Used by Active Directory to locate domain controllers, LDAP servers, and global catalog servers.
  • TXT (Text record): Maps a DNS name to descriptive text.
  • X25 (X.25 info record): Maps a DNS address to the public switched data network (PSDN) address number.
While there are various resource records that contain different information, there are a few required fields that each particular resource record has to contain:
  • Owner – the DNS domain that contains the resource record
  • TTL (Time to Live) – indicates how long DNS servers can cache the resource record's information before discarding it. This is, however, an optional resource record field.
  • Class – another optional resource record field. Class types were used in earlier implementations of the DNS naming system and are rarely significant today; IN (Internet) is the only class in common use.
  • Type – indicates the type of information contained in the resource record.
  • Record Specific Data – a variable length field that further defines the function of the resource. The format of the field is determined by Class and Type.
Delegation records and glue records can also be added to a zone. These records delegate a subdomain into a separate zone.
  • Delegation records: These are Name Server (NS) resource records in the parent zone that identify the DNS servers authoritative for the delegated zone.
  • Glue records: These are A-type resource records for the DNS servers that have authority over the delegated zone.
The more important resource records are discussed now. This includes the following:
  • Start of Authority (SOA), Name Server (NS), Host (A), Alias (CNAME), Mail exchanger (MX), Pointer (PTR), Service location (SRV)

Start of Authority (SOA) Resource Record

This is the first record in the DNS database file. The SOA record includes zone property information, such as the primary DNS server for the zone and version information.
The fields located within the SOA record are listed below:
  • Source host – the host for which the DNS database file is maintained
  • Contact e-mail – e-mail address for the individual who is responsible for the database file.
  • Serial number – the version number of the database.
  • Refresh time – the time that a secondary DNS server waits while determining whether database updates have been made that have to be replicated via zone transfer.
  • Retry time – the time for which a secondary DNS server waits before attempting a failed zone transfer again.
  • Expiration time – the time for which a secondary DNS server will continue to attempt to download zone information. Old zone information is discarded when this limit is reached.
  • Time to live – the time that the particular DNS server can cache resource records from the DNS database file.
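The SOA fields listed above can be inspected with the third-party dnspython library (assumed available; the zone name below is a placeholder):

import dns.resolver  # third-party dnspython package

soa = dns.resolver.resolve("example.com.", "SOA")[0]
print("source host :", soa.mname)    # primary DNS server for the zone
print("contact     :", soa.rname)    # responsible person's mailbox, dot-encoded
print("serial      :", soa.serial)   # version number of the zone database
print("refresh     :", soa.refresh)  # seconds between secondary update checks
print("retry       :", soa.retry)    # seconds before retrying a failed transfer
print("expire      :", soa.expire)   # seconds before stale secondary data is discarded
print("minimum TTL :", soa.minimum)  # default/negative-caching TTL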

Name Server (NS) Resource Record

The Name Server (NS) resource record provides a list of the authoritative DNS servers for a domain, as well as the authoritative DNS servers for any delegated subdomains. Each zone must have one or more NS resource records at the zone root. The NS resource records indicate the primary and secondary DNS servers for the zone defined in the SOA resource record, which in turn enables other DNS servers to look up names in the domain.

Host (A) Resource Record

The host (A) resource record contains the IP address of a specific host and maps the FQDN to this 32-bit IPv4 address. Host (A) resource records basically associate the domain names (FQDNs) or host names of computers with their IP addresses. Because a host (A) resource record statically associates a host name with a specific IP address, users can manually add these records to zones for machines that have statically assigned IP addresses.
The methods used to add host (A) resource records to zones are:
  • Manually add these records using the DNS management console.
  • Use the Dnscmd tool at the command line to add host (A) resource records.
  • TCP/IP client computers running Windows 2000, Windows XP, or Windows Server 2003 use the DHCP Client service to both register their names and update their host (A) resource records.
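A quick way to see what a host (A) record resolves to is a lookup with the third-party dnspython library (assumed available; the host name below is a placeholder):

import dns.resolver  # third-party dnspython package

# Resolve a host (A) record to its 32-bit IPv4 address.
for rdata in dns.resolver.resolve("host1.example.com.", "A"):
    print(rdata.address)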

Alias (CNAME) Resource Record

Alias (CNAME) resource records tie an alias name to its associated domain name; the domain name that the alias points to is referred to as the canonical name. By using canonical names, users can hide network information from the clients connected to their network. Alias (CNAME) resource records should be used when users have to rename a host that is defined in a host (A) resource record in the same zone.

Mail Exchanger (MX) Resource Record

The mail exchanger (MX) resource record provides routing for messages to mail servers and backup servers. The MX resource record identifies which mail server processes e-mail for the particular domain name. E-mail applications therefore make heavy use of MX resource records.
A mail exchanger (MX) resource record has the following parameters:
  • Priority
  • Mail server
The mail exchanger (MX) resource record enables the DNS server to work with e-mail addresses where no specific mail server is defined. A DNS domain can have multiple MX records. MX resource records can therefore also be used to provide failover to different mail servers when the primary server specified is unavailable. In this case, a server preference value is added to indicate the priority of a server in the list. Lower server preference values specify higher preference.
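The preference-based ordering described above can be seen with a small dnspython sketch (third-party library assumed available; the domain is a placeholder), which lists a domain's mail servers most-preferred first:

import dns.resolver  # third-party dnspython package

# A lower preference value means a higher priority mail server.
answers = dns.resolver.resolve("example.com.", "MX")
for mx in sorted(answers, key=lambda r: r.preference):
    print(mx.preference, mx.exchange)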

Pointer (PTR) Resource Record

The pointer (PTR) resource record points to a different resource record and is used for reverse lookups to point to A resource records. Reverse lookups resolve IP addresses to host names or FQDNs.
Add PTR resource records to zones through the following methods:
  • Manually add these records with the DNS management console.
  • Use the Dnscmd tool at the command line to add PTR resource records.
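A reverse lookup can be sketched with dnspython (third-party library assumed available; the address below is a documentation example, not a real host):

import dns.resolver
import dns.reversename  # both from the third-party dnspython package

# Build the reverse-lookup name for an address and resolve its PTR record.
rev_name = dns.reversename.from_address("192.0.2.10")  # -> 10.2.0.192.in-addr.arpa.
for rdata in dns.resolver.resolve(rev_name, "PTR"):
    print(rdata.target)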

Service (SRV) Resource Records

Service (SRV) resource records are typically used by Active Directory to locate domain controllers, LDAP servers, and global catalog servers. SRV records define the location of specific services in a domain. They associate the location of a service, such as a domain controller or global catalog server, with details on how the particular service can be contacted.
The fields of the service (SRV) resource record are explained below:
  • Service name
  • The protocol used
  • The domain name associated with the SRV records
  • The port number for the particular service
  • The Time to Live value
  • The class
  • The priority and weight
  • The target specifying the FQDN of the particular host supporting the service
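The SRV fields above can be read with a short dnspython sketch (third-party library assumed available); the _ldap._tcp name shown is the kind of record Active Directory clients query, with a placeholder domain:

import dns.resolver  # third-party dnspython package

# Each answer carries the priority, weight, port, and target host of the service.
for srv in dns.resolver.resolve("_ldap._tcp.example.com.", "SRV"):
    print(srv.priority, srv.weight, srv.port, srv.target)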

The Zone Database Files

If the user is not using Active Directory-integrated zones, the specific zone database files that are used for zone data are:
  • Domain Name file: When new A type resource records are added to the domain, they are stored in this file. When a zone is created, the Domain Name file contains the following:
    • An SOA resource record for the domain
    • An NS resource record that indicates the name of the DNS server that was created.
  • Reverse Lookup file: This database file contains information on a reverse lookup zone.
  • Cache file: This file contains a listing of the names and addresses of root name servers that are needed for resolving names that are external to the authoritative domains.
  • Boot file: This file controls the DNS server’s startup behavior. The boot file supports the commands listed below:
    • Directory command – this command defines the location of the other files specified in the Boot file.
    • Primary command – defines the domain for which this particular DNS server has authority.
    • Secondary – specifies a domain as being a secondary domain.
    • Cache command – this command defines the list of root hints used for contacting DNS servers for the root domain.

Planning DNS Zone Implementations

When users divide up the DNS namespace, DNS zones are created. Breaking up the namespace into zones enables DNS to more efficiently manage available bandwidth usage, which in turn improves DNS performance.
When determining how to break up the DNS zones, a few considerations to take include:
  • DNS traffic patterns: use the System Monitor tool to examine DNS performance counters and to obtain DNS server statistics.
  • Network link speed: The types of network links that exist between DNS servers should be determined when users plan the zones for their environment.
  • Whether full DNS servers or caching-only DNS servers are being used also affects how users break up DNS zones.
The main zone types used in Windows Server 2003 DNS environments are primary zones and Active Directory-integrated zones. The question on whether to implement primary zones or Active Directory-integrated zones would be determined by the environment’s DNS design requirements.
Both primary zones and secondary zones are standard DNS zones that use zone files. The main difference between primary zones and secondary zones is that primary zones can be updated. Secondary zones contain read-only copies of zone data. A secondary DNS zone can only be updated through DNS zone transfer. Secondary DNS zones are usually implemented to provide fault tolerance for the DNS server environment.
An Active Directory-integrated zone can be defined as an improved version of a primary DNS zone because it can use multi-master replication and the security features of Active Directory. The zone data of Active Directory-integrated zones are stored in Active Directory. Active Directory-integrated zones are authoritative primary zones.
A few advantages that Active Directory-integrated zone implementations have over standard primary zone implementations are:
  • Active Directory replication is faster, which means that the time needed to transfer zone data between zones is far less.
  • The Active Directory replication topology is used for Active Directory replication and for Active Directory-integrated zone replication. There is no longer a need for DNS replication when DNS and Active Directory are integrated.
  • Active Directory-integrated zones can enjoy the security features of Active Directory.
  • The need to manage Active Directory domains and DNS namespaces as separate entities is eliminated. This in turn reduces administrative overhead.
  • When DNS and Active Directory are integrated, the Active Directory-integrated zones are replicated and stored on any new domain controllers automatically. Synchronization takes place automatically when new domain controllers are deployed.
The mechanism that DNS utilizes to forward a query that one DNS server cannot resolve to another DNS server is called DNS forwarding. DNS forwarders are the DNS servers used to forward queries for a different DNS namespace to DNS servers that can answer them. A DNS server is configured as a DNS forwarder when users configure the other DNS servers to direct any unresolved queries to that specific DNS server. Creating DNS forwarders can improve name resolution efficiency.
Windows Server 2003 DNS introduces a new feature called conditional forwarding. With conditional forwarding, users create conditional forwarders within their environment that will forward DNS queries based on the specific domain names being requested in the query. This differs from DNS forwarders where the standard DNS resolution path to the root was used to resolve the query. A conditional forwarder can only forward queries for domains that are defined in the particular conditional forwarders list. The query is passed to the default DNS forwarder if there are no entries in the forwarders list for the specific domain queried.
When conditional forwarders are configured, the process to resolve domain names is illustrated below:
  1. A client sends a query to the DNS server for name resolution.
  2. The DNS server checks its DNS database file to determine whether it can resolve the query with its zone data.
  3. The DNS server also checks its DNS server cache to resolve the request.
  4. If the DNS server is not configured to use forwarding, the server uses recursion to attempt to resolve the query.
  5. If the DNS server is configured to forward the query for a specific domain name to a DNS forwarder, the DNS server then forwards the query to the IP address of its configured DNS forwarder.
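The forwarder selection in step 5 can be pictured with a minimal Python sketch (the forwarder table and addresses are hypothetical): the queried name is matched against the configured domain suffixes, and the default forwarder is used when nothing matches.

# Hypothetical conditional-forwarder table: domain suffix -> forwarder address.
conditional_forwarders = {
    "corp.example.com": "10.0.0.53",
    "partner.example":  "172.16.0.53",
}
default_forwarder = "192.0.2.53"

def pick_forwarder(query_name):
    # Use the forwarder for the most specific matching suffix; fall back to
    # the default forwarder when no conditional entry matches the query.
    name = query_name.rstrip(".").lower()
    matches = [s for s in conditional_forwarders
               if name == s or name.endswith("." + s)]
    return conditional_forwarders[max(matches, key=len)] if matches else default_forwarder

print(pick_forwarder("host1.corp.example.com"))  # -> 10.0.0.53
print(pick_forwarder("www.unrelated.org"))       # -> 192.0.2.53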
A few considerations for configuring forwarders for the DNS environment are:
  • Only implement the DNS forwarders that are necessary for the environment. Refrain from creating loads of forwarders for the internal DNS servers.
  • Avoid chaining your DNS servers together in a forwarding configuration.
  • To avoid the DNS forwarder turning into a bottleneck, do not configure one external DNS forwarder for all the internal DNS servers.

How to Create a New Zone

  1. Click Start, Administrative Tools, and DNS to open the DNS console.
  2. Expand the Forward Lookup Zones folder.
  3. Select the Forward Lookup Zones folder.
  4. From the Action menu, select New Zone.
  5. The New Zone Wizard initiates.
  6. On the initial page of the Wizard, click Next.
  7. On the Zone Type page, ensure that the Primary Zone Creates A Copy Of A Zone That Can Be Updated Directly On This Server option is selected. This option is selected by default.
  8. Uncheck the Store The Zone In Active Directory (Available Only If DNS Server Is A Domain Controller) checkbox.
    Click Next.
  9. On the Zone Name page, enter the correct name for the zone in the Zone Name textbox. Click Next.
  10. On the Zone File page, ensure that the default option, Create A New File With This File Name, is selected. Click Next.
  11. On the Dynamic Update page, ensure that the Do Not Allow Dynamic Updates. Dynamic Updates Of Resource Records Are Not Accepted By This Zone. You Must Update These Records Manually option is selected. Click Next.
  12. The Completing The New Zone Wizard page is displayed next.
  13. Click Finish to create the new zone.
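For scripted deployments, roughly the same result as the wizard steps above can be achieved with the Dnscmd tool mentioned earlier; the sketch below (Python driving dnscmd, with hypothetical zone and file names) is illustrative only and assumes the DNS server role and dnscmd are installed locally:

import subprocess

# Hypothetical zone and file names; dnscmd ships with the Windows Server DNS role.
zone = "sales.example.com"
zone_file = "sales.example.com.dns"

# Roughly equivalent to steps 7-10 above: create a standard, file-backed primary zone.
subprocess.run(["dnscmd", "/zoneadd", zone, "/primary", "/file", zone_file], check=True)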

How to Create Subdomains

  1. Click Start, Administrative Tools, and DNS to open the DNS console.
  2. In the console tree, select the appropriate zone.
  3. From the Action menu, select New Domain.
  4. The DNS Domain dialog box opens.
  5. Enter the name for new subdomain.
  6. Click OK to create the new subdomain.

How to create a reverse lookup zone

  1. Click Start, Administrative Tools, and DNS to open the DNS console.
  2. Select the appropriate DNS server in the console tree.
  3. Right-click the DNS server then select New Zone from the shortcut menu.
  4. The New Zone Wizard starts.
  5. Click Next on the first page of the New Zone Wizard.
  6. On the Zone Type page, ensure that the Primary Zone option is selected. Click Next.
  7. On the following page, select the Reverse lookup zone option. Click Next.
  8. Enter the IP network in the Network ID box for the domain name that the new reverse lookup zone is being created for. Click Next.
  9. Accept the default zone file name. Click Next.
  10. On the Dynamic Update page, select the Allow both nonsecure and secure dynamic updates option, then click Next.
  11. The Completing The New Zone Wizard page is displayed next.
  12. Click Finish to create the new reverse lookup zone.

How to Create a Stub Zone

  1. Click Start, Administrative Tools, and then click DNS to open the DNS console.
  2. Expand the Forward Lookup Zones folder.
  3. Select the Forward Lookup Zones folder.
  4. From the Action menu, select New Zone.
  5. The New Zone Wizard initiates.
  6. On the initial page of the Wizard, click Next.
  7. On the Zone Type page, select the Stub Zone option.
  8. Uncheck the Store The Zone In Active Directory (Available Only If DNS Server Is A Domain Controller) checkbox. Click Next.
  9. On the Zone Name page, enter the name for the new stub zone in the Zone Name textbox then click Next.
  10. Accept the default setting on the Zone file page. Click Next.
  11. On the Master DNS Servers page, enter the IP address of the master server in the Address text box. Click Next.
  12. On the Completing The New Zone Wizard page, click Finish.

How to Add Resource Records to Zones

  1. Click Start, Administrative Tools, and DNS to open the DNS console.
  2. In the console tree, select the zone to add resource records to.
  3. From the Action menu, select the resource record type to be added to the zone. The options are:
    • New Host (A)
    • New Alias (CNAME)
    • New Mail Exchanger (MX)
    • Other New Records
  4. Select the New Host (A) option.
  5. The New Host dialog box opens.
  6. In the Name (Use Parent Domain Name If Blank) textbox, enter the name of the new host.
  7. When the user specifies the name of the new host, the resulting FQDN is displayed in the Fully qualified domain name (FQDN) textbox.
  8. In the IP Address box, enter the address for the new host.
  9. To create an associated pointer (PTR) record, enable the checkbox.
  10. Click the Add Host button.
  11. The new host (A) resource record is added to the particular zone.
  12. A message box is displayed, verifying that the new host (A) resource record was successfully created for the zone.
  13. Click OK.
  14. Click Done to close the New Host dialog box.
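The same host (A) record can also be added from the command line with the Dnscmd tool; a minimal sketch (Python driving dnscmd, with hypothetical zone, host, and address values) is shown below:

import subprocess

# Hypothetical names and address; equivalent to adding a host (A) record
# through the console as described in the steps above.
zone, host, ip = "sales.example.com", "host1", "192.0.2.10"
subprocess.run(["dnscmd", "/recordadd", zone, host, "A", ip], check=True)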

How to Create a Zone Delegation

  1. Click Start, Administrative Tools, and select DNS to open the DNS console.
  2. Right-click the subdomain in the console tree, then select New Delegation from the shortcut menu.
  3. The New Delegation Wizard initiates.
  4. Click Next on the first page of the New Delegation Wizard.
  5. When the Delegated Domain Name page opens, provide a delegated domain name then click Next.
  6. On the Name Servers page, click the Add button to provide the DNS servers’ names and IP addresses that should host the delegation.
  7. On the Name Servers page, click Next.
  8. Click Finish.

How to Enable Dynamic Updates for a Zone

  1. Click Start, Administrative Tools, and the select DNS to open the DNS console.
  2. Right-click the zone to work with in the console tree, then select Properties from the shortcut menu.
  3. When the Zone Properties dialog box opens, on the General tab, select Yes in the Allow Dynamic Updates list box.
  4. Click OK.

How to Configure a Zone to Use WINS for Name Resolution

Users can configure their forward lookup zone to use WINS for name resolution in instances where the queried name is not found in the DNS namespace.
  1. Click Start, Administrative Tools, and DNS to open the DNS console.
  2. In the console tree, expand the DNS server node then expand the Forward Lookup Zones folder.
  3. Locate and right-click the zone to be configured, then select Properties from the shortcut menu.
  4. When the Zone Properties dialog box opens, click the WINS tab.
  5. Enable the Use WINS Forward Lookup checkbox.
  6. Type the WINS server IP address. Click Add, then OK.
  7. On the General tab, select Yes in the Allow Dynamic Updates list box.
  8. Click OK.
