In load balancing, a single Internet Protocol (IP) address is shared between multiple servers. This helps achieve optimal resource utilization, maximize the amount of data passing reliably through the system from input to output, and minimize response time.
Load balancing improves server performance and is therefore used in busy networks, where a single server struggles to satisfy every request issued to it. Load balancing simply means evening out the processes that request execution on the server. The distribution is managed by a dedicated program or by hardware devices. In a typical web-service setup, two web servers take part in the load-balancing scheme: if one web server becomes overloaded, the other shares the work and executes the requested processes. Load balancing is also tied to a service time so that multiple requests can be handled without causing congestion; each process is allotted a specific time on the server for its execution and may not remain on the server once that service time is exceeded. When the load balancer works actively, the overall service time is reduced.
For an internet service, the load balancer is a software program that receives each request and forwards it through the ports connected to it. Requests from the client are passed to the backend web servers, where the work is shared among multiple machines and processed. The client receives the appropriate response without knowing anything about this internal division of work. The load balancer also prevents clients from connecting to the backend servers directly, which helps keep the data secure. It can even keep working when servers are unavailable: requests are held by the balancer and dispatched later, once a server becomes available again.
There are alternative load-balancing methods that do not require a separate software or hardware node. In this approach, the client chooses a server itself and gets service from it. The technique is not transparent to the client, because it discloses the existence of the multiple backend servers. The most straightforward method of this kind is round-robin DNS, which depends on having some degree of control over the DNS server: multiple IP addresses are associated with a single domain name. Within the servers, processes are scheduled for execution using a variety of scheduling algorithms, and round robin is one of the scheduling algorithms used in load balancing to make it easy to pick the backend server a request should be sent to.
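The selection logic itself can be tiny. The sketch below is a minimal illustration, not a production balancer, and the server addresses are made up: it simply cycles through a fixed list of backends in round-robin order.

```python
from itertools import cycle

# Hypothetical backend pool; in round-robin DNS these would be the
# multiple IP addresses registered for a single domain name.
BACKENDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

_rotation = cycle(BACKENDS)

def pick_backend_round_robin():
    """Return the next backend in strict rotation."""
    return next(_rotation)

# Each incoming request is simply handed the next server in the list.
for request_id in range(6):
    print(request_id, "->", pick_backend_round_robin())
```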
Assigning a process to a particular server can be based on the client's IP address or on random assignment. Keying the assignment to the IP address is sometimes unreliable, because the IP address may change during the transaction. With random assignment, the load balancer faces a storage burden for tracking the assignments; data sometimes gets overwritten, which leads to the loss of the original information. Fortunately, there are practical load-balancing approaches that still give fast access to a server.
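For comparison, here is a hedged sketch of IP-based assignment: hashing the client address picks a stable server for that client without the balancer having to store any per-client state (with the trade-off, noted above, that the mapping changes if the client's IP changes). The addresses are invented for illustration.

```python
import hashlib

BACKENDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # same hypothetical pool

def pick_backend_by_ip(client_ip: str) -> str:
    """Map a client IP to a backend deterministically via a hash."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    index = digest[0] % len(BACKENDS)
    return BACKENDS[index]

print(pick_backend_by_ip("203.0.113.7"))    # always the same backend
print(pick_backend_by_ip("198.51.100.42"))  # possibly a different one
```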
Tuesday, August 11, 2009
Tuesday, August 4, 2009
How to Choose the Best Webhosting Provider
Webhosting providers often come via a suggestion from your website designer. In fact, many webmasters require that you use a webhosting company of their choice or their own dedicated in-house server. Many webmasters claim to have major compatibility and access issues if they have to work with a secondary host outside their network.
For instance, if you were to develop a new website using a free or low-cost website development system, you would likely find your webhosting tied to the company that let you build the site with its exclusive tools.
If a webhosting service is preferred by your website designer, it may be in your best interest to verify that provider's overall functionality. Also look at the perceived reliability of the webhosting service before you give the OK to move forward with that provider. A web designer can build you a great website that still drives customers away because of frequent online outages and other reliability problems.
The best-case scenario is a combination of good web design functionality and solid accompanying webhosting services. Regardless, you should always check the credentials of the website designer as well as the webhosting provider's capabilities for your current and future website needs.
Unfortunately, many webhosting providers completely overload their servers to save money and run into recurring periods of downtime. Does your webhost provide sufficient bandwidth to allow for website growth, or at least make bandwidth easy to upgrade?
Some websites experience rapid growth, and a webhosting provider that cannot keep up with the traffic demands becomes a major problem.
Most webhosting providers have system contingencies in place to manage the growth of your website needs. However, don't take these things for granted. Make sure to ask questions so you can make an informed decision about the quality, reliability and effectiveness of the webhosting service you select.
Monday, June 15, 2009
Virtual Memory
If the valid bit is zero in the page table entry for the logical address, the page is not in memory and must be fetched from disk.
– This is a page fault.
– If necessary, a page is evicted from memory and is replaced by the page retrieved from disk, and the valid bit is set to 1.
If the valid bit is 1, the virtual page number is replaced by the physical frame number.
The data is then accessed by appending the offset to the physical frame number.
As an example, suppose a system has a virtual address space of 8K and a physical address space of 4K, and the system uses byte addressing.
– We have 2^13 / 2^10 = 2^3 = 8 virtual pages, each 1K in size.
A virtual address therefore has 13 bits (8K = 2^13), with 3 bits for the page field and 10 bits for the offset, because the page size is 1024.
A physical memory address requires 12 bits (4K = 2^12): the first two bits select the page frame and the trailing 10 bits are the offset.
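A minimal sketch of this translation, using the 8K-virtual / 4K-physical example above (the page table contents are made up purely for illustration):

```python
PAGE_SIZE = 1024          # 2^10 bytes per page
OFFSET_BITS = 10

# One entry per virtual page (8 entries for an 8K virtual space).
# Each entry is (valid_bit, physical_frame_number); frames 0-3 are the
# four 1K frames of the 4K physical memory.
page_table = [
    (1, 2), (0, None), (1, 0), (0, None),
    (1, 3), (0, None), (0, None), (1, 1),
]

def translate(virtual_address: int) -> int:
    """Translate a 13-bit virtual address to a 12-bit physical address."""
    page = virtual_address >> OFFSET_BITS          # top 3 bits
    offset = virtual_address & (PAGE_SIZE - 1)     # low 10 bits
    valid, frame = page_table[page]
    if not valid:
        raise RuntimeError(f"page fault on virtual page {page}")
    return (frame << OFFSET_BITS) | offset         # append offset to frame

print(hex(translate(0x0A4)))   # virtual page 0 -> frame 2 -> 0x8A4
```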
Friday, June 5, 2009
Virtual Memory
What if a process has an address space larger than physical memory? For instance, what if you want 2 gigabytes of instructions/storage to run on a machine with 1 gigabyte of physical memory?
Main Memory (DRAM)?
The process address space runs from 0 to 2^n - 1, where n is the machine word size (e.g., 32 bits). Main memory is temporary storage and is not as big as the process address space; instead, your program is usually stored on some form of permanent storage (disk or tape).
Why not just make main memory large enough?
• We can't rely totally on memory technology alone – cost, speed and capacity are all limiting factors.
Memory Comparison – cost
• The full address space is quite large: e.g., a 32-bit address (with 1 byte per location) covers ~4 GB.
• Disk storage is ~300X cheaper than DRAM: 80 GB of DRAM ~ $5,000 vs. 200 GB of disk ~ $70.
• To access large amounts of data in a cost-effective manner, the bulk of the data must be stored on disk.
Original Motivation for VM
• IBM wanted one software suite for a family of System/370 computers.
• This allowed customers to purchase a smaller system with the knowledge that they could upgrade to a larger system later.
• It also allowed the same program to run on machines with different memory sizes (earlier, programmers had to do explicit memory management).
• The idea was to create the illusion for a process that it has memory as big as its address space – hence the concept of virtual memory, i.e. memory that appears to be there but isn't.
• A physical address is the actual memory address of physical memory. We've seen this kind of address in our PIC programming.
• Programs use virtual addresses that are mapped to physical addresses by the memory manager. (Example – suppose I have three identical processes running. How could they each touch a piece of data at location 1000? See the sketch after this list.)
• Page faults occur when a logical address requires that a page be brought in from disk.
• Memory fragmentation occurs when the paging process results in the creation of small, unusable clusters of memory addresses.
• Main memory and virtual memory are divided into equal-sized pages.
• The entire address space required by a process need not be in memory at once. Some parts can be on disk, while others are in main memory.
• Further, the pages allocated to a process do not need to be stored contiguously – either on disk or in memory.
• In this way, only the needed pages are in memory at any time; the unnecessary pages stay in slower disk storage.
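Here is a hedged sketch of the "location 1000" example above: each process gets its own page table, so the same virtual address can land in a different physical frame for each process, and a missing page simply triggers a page fault before the access is retried. The table contents and frame numbers are invented for illustration.

```python
PAGE_SIZE = 1024
OFFSET_BITS = 10

# Per-process page tables: virtual page -> (valid, physical frame).
# Three identical processes all use virtual address 1000 (page 0),
# but page 0 maps to a different frame in each process.
page_tables = {
    "proc_A": {0: (1, 5)},
    "proc_B": {0: (1, 9)},
    "proc_C": {0: (0, None)},   # not resident: first access page-faults
}

def access(process: str, vaddr: int) -> int:
    page, offset = vaddr >> OFFSET_BITS, vaddr & (PAGE_SIZE - 1)
    valid, frame = page_tables[process].get(page, (0, None))
    if not valid:
        # Page fault: pretend the OS fetched the page from disk into frame 2.
        frame = 2
        page_tables[process][page] = (1, frame)
    return (frame << OFFSET_BITS) | offset

for p in ("proc_A", "proc_B", "proc_C"):
    print(p, "virtual 1000 ->", access(p, 1000))
```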
Wednesday, June 3, 2009
The Memory Hierarchy
• Faster memory is more expensive than slower memory.
• For the best performance at the lowest cost, memory is organized in a hierarchical fashion.
• Small, fast storage elements are kept in the CPU; larger, slower main memory is accessed through the data bus.
• Larger, (almost) permanent storage in the form of disk and tape drives is still further from the CPU.
• An entire block of data is copied once the requested item is found, because the principle of locality tells us that once a byte is accessed, it is likely that a nearby data element will be needed soon.
• There are three forms of locality:
– Temporal locality – recently accessed data elements tend to be accessed again.
– Spatial locality – accesses tend to cluster.
– Sequential locality – instructions tend to be accessed sequentially.
• Cache line – the group of bytes brought in together with the requested item.
Mechanism:
• To access a particular piece of data, the CPU sends a request to its nearest memory, usually cache.
• If the data is not in cache, then main memory is queried. If the data is not in main memory, then the request goes to disk.
• Once the data is located, it and a number of its nearby data elements are copied into the faster levels of the hierarchy.
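A minimal sketch of that lookup order (the "levels" here are just Python dictionaries standing in for cache, main memory, and disk; this illustrates the search order, not real hardware):

```python
# Each level maps an address to a value; smaller/faster levels come first.
cache, main_memory, disk = {}, {0x10: "A"}, {0x10: "A", 0x20: "B"}
HIERARCHY = [("cache", cache), ("main memory", main_memory), ("disk", disk)]

def read(address):
    for name, level in HIERARCHY:
        if address in level:
            # Copy the found item into every faster level on the way back up,
            # so a repeat access hits closer to the CPU (temporal locality).
            for _, faster in HIERARCHY:
                if faster is level:
                    break
                faster[address] = level[address]
            return level[address], name
    raise KeyError(address)

print(read(0x20))   # ('B', 'disk') on the first access
print(read(0x20))   # ('B', 'cache') on the second
```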
SDRAM – Synchronous Dynamic Random Access Memory
• Short for Synchronous DRAM, a type of DRAM that can run at much higher clock speeds than conventional memory. SDRAM synchronizes itself with the CPU's bus and is capable of running at 133 MHz and above.
DDR (Double Data Rate) is a technique used in some SDRAM memories to increase the speed at which data can be written to and retrieved from the memory.
DDR increases the transfer rate by sending/receiving memory data twice per clock cycle. This gives a theoretical doubling of the transfer speed.
DDR2 SDRAM maintains the same core functions, transferring 64 bits of data twice every clock cycle, for an effective transfer rate twice that of the front-side bus (FSB) of the computer system and an effective bandwidth equal to that transfer rate x 8 bytes.
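As a worked example of that "rate x 8 bytes" rule (the DDR2-800 figures below are an assumed illustration, not taken from the post): a 400 MHz bus transferring 64 bits twice per cycle gives 400 x 2 x 8 = 6400 MB/s of peak bandwidth.

```python
bus_clock_mhz = 400        # assumed DDR2-800 bus clock
transfers_per_cycle = 2    # "double data rate"
bytes_per_transfer = 8     # 64-bit wide memory channel

peak_bandwidth_mb_s = bus_clock_mhz * transfers_per_cycle * bytes_per_transfer
print(peak_bandwidth_mb_s)  # 6400 MB/s for this assumed example
```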
Flash Memory
A type of EEPROM.
Non-volatile – doesn't require power to hold data.
Data is written in blocks – not byte-accessible. Great for disk-like devices that read and write in 4096-byte chunks; not good as random access memory.
Limited to about 1,000,000 write/erase cycles – and blocks can go bad.
The controller can do bad-block remapping and error checking.
The controller can do wear leveling – moving blocks around so that no one area on the chip suffers excessive wear.
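A hedged sketch of the wear-leveling idea mentioned above: the controller keeps an erase counter per block and steers new writes toward the least-worn block. Real controllers are far more sophisticated; this only illustrates the principle.

```python
NUM_BLOCKS = 8
erase_counts = [0] * NUM_BLOCKS   # how many times each block has been erased

def pick_block_for_write() -> int:
    """Choose the block with the fewest erases so wear spreads evenly."""
    block = erase_counts.index(min(erase_counts))
    erase_counts[block] += 1       # erasing before programming wears the block
    return block

# Simulate a burst of writes; no single block accumulates all the wear.
for _ in range(20):
    pick_block_for_write()
print(erase_counts)                # e.g. [3, 3, 3, 3, 2, 2, 2, 2]
```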
Tuesday, June 2, 2009
Cache Memory
• The purpose of cache memory is to speed up accesses by storing recently used data closer to the CPU, instead of keeping it only in main memory.
• Although cache is much smaller than main memory, its access time is a fraction of that of main memory.
• Unlike main memory, which is accessed by address, cache is typically accessed by content; hence, it is often called content-addressable memory.
• Because of this, a single large cache memory isn't always desirable – it takes longer to search.
• The "content" that is addressed in content-addressable cache memory is a subset of the bits of a main memory address called a field.
• The fields into which a memory address is divided provide a many-to-one mapping between the larger main memory and the smaller cache memory.
• Many blocks of main memory map to a single block of cache. A tag field in the cache block distinguishes one cached memory block from another.
• The simplest cache mapping scheme is direct mapped cache.
• In a direct mapped cache consisting of N blocks of cache, block X of main memory maps to cache block Y = X mod N.
• Thus, if we have 10 blocks of cache, cache block 7 may hold blocks 7, 17, 27, 37, . . . of main memory.
• Once a block of memory is copied into its slot in cache, a valid bit is set for the cache block to let the system know that the block contains valid data.
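A minimal sketch of the direct-mapped scheme above, with 10 cache blocks and a tag plus valid bit per block (the sequence of accesses is made up for illustration):

```python
NUM_BLOCKS = 10

# Each cache slot holds (valid, tag); the tag records which main-memory
# block currently occupies the slot.
cache = [(0, None)] * NUM_BLOCKS

def access_block(mem_block: int) -> str:
    slot = mem_block % NUM_BLOCKS          # Y = X mod N
    valid, tag = cache[slot]
    if valid and tag == mem_block:
        return "hit"
    cache[slot] = (1, mem_block)           # copy block in, set valid bit
    return "miss"

# Blocks 7, 17 and 27 all compete for cache slot 7.
for block in (7, 7, 17, 7, 27):
    print(block, access_block(block))      # miss, hit, miss, miss, miss
```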
Monday, May 11, 2009
What is the Difference Between a Software Engineer and a Computer Programmer?
The terms software engineer or computer programmer may be confusing to the average computer user. Most of us associate computer programs with the generic term 'Software'. To us it may seem that the terms are interchangeable. That is not so. The role played by a software engineer is significantly different from that of a computer programmer. Before learning what the difference between a software engineer and a computer programmer is, let us see what is meant by the term software engineering and how it relates to computer programming.
Software engineering is a rigorous approach to the development, maintenance and testing of software. Software engineers must be knowledgeable about software requirements, design, development, maintenance and testing, and must be well versed in the tools and methods used throughout the development process as a whole. The discipline is thus a convergence of computer science and systems engineering, with a great deal of project management added for good measure, and its practitioners are expected to have managerial skills in addition to technical ones.
A computer programmer, on the other hand, is required to develop, test and maintain code that runs on a computer. He is responsible for converting the specifications produced in the software requirements definition phase into working code. Computer programmers are often involved with the design and maintenance of websites too, and should be proficient in program analysis. They may also collaborate with manufacturers to develop new software methodologies as hardware evolves. Training, documentation and report generation are further tasks typically handled by a computer programmer.
We can observe that the skill set required by a computer programmer is a subset of the skills expected from a software engineer. The computer programmer is a specialist in some areas covered by software engineering. A software engineer is in charge of the overall software development process and is expected to improve the reliability and maintainability of this complex process. A software engineer may have a team of computer programmers working under his supervision.