Sunday, July 10, 2011



Multimedia is any computer-based presentation or application that integrates several forms of media, such as text, graphics, sound, animation, and video. Multimedia can be classified as interactive or non-interactive.
Computer-based multimedia combines two or more media and includes:
1- Computer multimedia, which offers:
       - A multi-sensory experience, like the real world.
       - Multi-sensory memory imprints.
       - Benefits for different learning styles.
2- Hypertext, such as links.
3- Hypermedia, such as hypermedia ware, which is based on cognitive theories of how people structure knowledge and how they learn.
Hypermedia has many applications, such as:
1- Instructional courseware that is:
       - Appropriately introduced.
       - Supported by follow-up activities.
2- Teachers' and students' own creations.
Multimedia has many advantages, such as:
- Engrossing: it invites deep involvement.
- Multi-sensory.
- Creates knowledge connections.
- Individualized.
- Supports teacher and student creation.
It also has disadvantages:
- "Lost in cyberspace."
- Lack of structure.
- Non-interactive: if one-way, there is no feedback.
- Text-intensive content.
- Complex to create.
- Time-consuming.
- Cognitive overload.
- Linear content.

Major Categories of Multimedia Titles are:
1-Entertainment
2-Education
3-Corporate communications
4-Reference


Internet Protocol (IP)
     The Internet Protocol (IP) is the principal communications protocol used for relaying datagrams (packets) across an internetwork using the Internet Protocol Suite. Responsible for routing packets across network boundaries, it is the primary protocol that establishes the Internet.
IP is the primary protocol in the Internet Layer of the Internet Protocol Suite and has the task of delivering datagrams from the source host to the destination host solely based on their addresses. For this purpose, IP defines addressing methods and structures for datagram encapsulation.
Historically, IP was the connectionless datagram service in the original Transmission Control Program introduced by Vint Cerf and Bob Kahn in 1974, the other being the connection-oriented Transmission Control Protocol (TCP). The Internet Protocol Suite is therefore often referred to as TCP/IP.
The Internet Protocol is responsible for addressing hosts and routing datagrams (packets) from a source host to the destination host across one or more IP networks. For this purpose the Internet Protocol defines an addressing system that has two functions: addresses identify hosts and provide a logical location service. Each packet is tagged with a header that contains the metadata needed for delivery. This process of tagging is also called encapsulation.
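The encapsulation step described above can be sketched in Python. The snippet below packs a minimal, hypothetical 20-byte IPv4 header with `struct`; the addresses and field values are chosen purely for illustration, and the checksum field is left at zero here.

```python
import socket
import struct

def build_ipv4_header(src_ip, dst_ip, payload_len, ttl=64, proto=socket.IPPROTO_UDP):
    """Pack a minimal 20-byte IPv4 header (checksum left at zero here)."""
    version_ihl = (4 << 4) | 5        # version 4, header length = 5 * 32-bit words
    total_length = 20 + payload_len   # header plus payload, in bytes
    identification = 0
    flags_fragment = 0
    checksum = 0                      # normally computed over the finished header
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_length,
        identification, flags_fragment,
        ttl, proto, checksum,
        socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
    )

header = build_ipv4_header("192.0.2.1", "198.51.100.7", payload_len=32)
print(len(header))  # 20
```

The first byte, `0x45`, encodes the version (4) and header length (5 words), which is why packet dumps of ordinary IPv4 traffic so often begin with it.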
The design principles of the Internet protocols assume that the network infrastructure is inherently unreliable at any single network element or transmission medium and that it is dynamic in terms of availability of links and nodes. No central monitoring or performance measurement facility exists that tracks or maintains the state of the network. For the benefit of reducing network complexity, the intelligence in the network is purposely mostly located in the end nodes of each data transmission, cf. end-to-end principle. Routers in the transmission path simply forward packets to the next known local gateway matching the routing prefix for the destination address.
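The forwarding decision described above — matching the routing prefix for the destination address — can be sketched with Python's standard `ipaddress` module. The routing table and gateway names below are hypothetical; real routers use the longest matching prefix, as this sketch does.

```python
import ipaddress

# Hypothetical routing table: prefix -> next-hop gateway name.
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "gw-default",
    ipaddress.ip_network("10.0.0.0/8"): "gw-internal",
    ipaddress.ip_network("10.1.2.0/24"): "gw-branch",
}

def next_hop(dst):
    """Forward to the gateway whose prefix is the longest match for dst."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.1.2.9"))   # gw-branch
print(next_hop("10.9.9.9"))   # gw-internal
print(next_hop("8.8.8.8"))    # gw-default
```

Note that the router needs no global picture of the network: each hop independently picks the best local match, which is exactly the stateless, end-to-end design the paragraph describes.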



The only assistance that the Internet Protocol provides in Version 4 (IPv4) is to ensure that the IP packet header is error-free through computation of a checksum at the routing nodes. This has the side-effect of discarding packets with bad headers on the spot. In this case no notification is required to be sent to either end node, although a facility exists in the Internet Control Message Protocol (ICMP) to do so.
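The header checksum that IPv4 routers verify is defined in RFC 791 as the one's complement of the one's complement sum of the header's 16-bit words. A minimal sketch, using a sample header with the checksum field zeroed:

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """RFC 791 checksum: one's complement of the one's complement
    sum of the header's 16-bit big-endian words."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total >> 16:                      # fold carry bits back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Sample 20-byte header with the checksum field (bytes 10-11) zeroed:
hdr = bytes.fromhex("4500003c1c4640004006" + "0000" + "ac100a63ac100a0c")
print(hex(ipv4_checksum(hdr)))
```

A useful property for testing: once the computed checksum is written back into the header, recomputing the checksum over the whole header yields zero, which is how a router detects a bad header cheaply at each hop.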
IPv6, on the other hand, has abandoned the use of IP header checksums for the benefit of rapid forwarding through routing elements in the network.
The resolution or correction of reliability issues such as data corruption, packet loss, duplication, and out-of-order delivery is the responsibility of an upper layer protocol. For example, to ensure in-order delivery the upper layer may have to cache data until it can be passed to the application.
In addition to issues of reliability, this dynamic nature and the diversity of the Internet and its components provide no guarantee that any particular path is actually capable of, or suitable for, performing the data transmission requested, even if the path is available and reliable. One of the technical constraints is the size of data packets allowed on a given link. An application must ensure that it uses proper transmission characteristics; some of this responsibility also lies in the upper layer protocols between the application and IP. Facilities exist to examine the maximum transmission unit (MTU) size of the local link, as well as of the entire projected path to the destination when using IPv6. The IPv4 internetworking layer has the capability to automatically fragment the original datagram into smaller units for transmission. In this case, IP does provide re-ordering of fragments delivered out of order.
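The IPv4 fragmentation behaviour described above can be sketched as follows. The fragment offset field counts 8-byte units, so every fragment except the last must carry a multiple of 8 data bytes; the dictionary-based fragment layout here is a simplification for illustration, not the on-wire format.

```python
def fragment(payload: bytes, mtu: int, header_len: int = 20):
    """Split a datagram payload into fragments that fit within the MTU.
    All fragments except the last carry a multiple of 8 data bytes,
    because the IPv4 fragment offset field counts 8-byte units."""
    max_data = (mtu - header_len) // 8 * 8   # largest 8-byte-aligned chunk
    frags = []
    offset = 0
    while offset < len(payload):
        chunk = payload[offset:offset + max_data]
        more = offset + len(chunk) < len(payload)   # "more fragments" flag
        frags.append({"offset": offset // 8, "more_fragments": more, "data": chunk})
        offset += len(chunk)
    return frags

# A 4000-byte payload over a 1500-byte MTU link splits into three fragments:
parts = fragment(b"x" * 4000, mtu=1500)
print([(f["offset"], f["more_fragments"], len(f["data"])) for f in parts])
```

The receiver uses the offset and "more fragments" flag to reassemble the original datagram, which is why IP can re-order fragments that arrive out of order even though it offers no such guarantee for whole datagrams.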