aberke/ip_tcp

********************************** TCP ********************************************************************

HOW TO RUN:
The makefile has many targets to help with building, debugging, and running the program. 
To build everything, run make build, or make rebuild to clean first. 
To run the tests, run make test. 
To run a single node (A.lnx) under valgrind, run make valgrind. 


QUITTING NODE:

We send an RST before deleting the TCB from the node (although this disagrees with the reference implementation, which seems to assume that the user should commit to running a TCP node FOREVER).
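
As a rough illustration of what "send an RST" involves (not the code in src/tcp -- the struct layout and names here are hypothetical), the segment just needs the RST flag set and the connection's current send sequence number:

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>

    /* Hypothetical minimal TCP header -- for illustration only. */
    struct tcp_hdr {
        uint16_t src_port;
        uint16_t dst_port;
        uint32_t seq;
        uint32_t ack;
        uint8_t  offset;    /* upper 4 bits: header length in 32-bit words */
        uint8_t  flags;     /* FIN=0x01 SYN=0x02 RST=0x04 PSH=0x08 ACK=0x10 */
        uint16_t window;
        uint16_t checksum;
        uint16_t urgent;
    };

    /* Fill in a bare RST segment for a connection about to be torn down;
       snd_nxt is the connection's next send sequence number. */
    static void build_rst(struct tcp_hdr *hdr, uint16_t src_port,
                          uint16_t dst_port, uint32_t snd_nxt)
    {
        memset(hdr, 0, sizeof(*hdr));
        hdr->src_port = htons(src_port);
        hdr->dst_port = htons(dst_port);
        hdr->seq      = htonl(snd_nxt);
        hdr->offset   = 5 << 4;    /* 20-byte header, no options */
        hdr->flags    = 0x04;      /* RST */
        /* the checksum over the pseudo-header + segment is computed before sending */
    }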


DESIGN DECISIONS

== The State Machine ==

The state machine abstracts away the state transitions needed to implement TCP. Each connection in our node has its own state machine, which holds a pointer back to the connection so that it can invoke the appropriate callback functions when a transition is made. These 'callback' functions are set when the state machine is initialized; we simply pass transitions to the state machine (such as received-valid-syn) and the state machine reacts accordingly. The state machine itself is just a transition matrix that records in entry A[i][j] the NEXT state to move to when in state i and receiving transition j. 
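
A minimal sketch of this idea with a deliberately tiny, hypothetical state/transition set (the real machine covers all of the TCP states):

    typedef enum { CLOSED, LISTEN, SYN_SENT, SYN_RECEIVED, ESTABLISHED, NUM_STATES } state_t;
    typedef enum { ACTIVE_OPEN, PASSIVE_OPEN, RECEIVED_VALID_SYN, RECEIVED_VALID_SYN_ACK,
                   RECEIVED_VALID_ACK, NUM_TRANSITIONS } transition_t;

    #define INVALID -1

    /* A[i][j] = next state when sitting in state i and receiving transition j. */
    static const int transition_matrix[NUM_STATES][NUM_TRANSITIONS] = {
        /* CLOSED       */ { SYN_SENT, LISTEN,  INVALID,      INVALID,     INVALID },
        /* LISTEN       */ { INVALID,  INVALID, SYN_RECEIVED, INVALID,     INVALID },
        /* SYN_SENT     */ { INVALID,  INVALID, SYN_RECEIVED, ESTABLISHED, INVALID },
        /* SYN_RECEIVED */ { INVALID,  INVALID, INVALID,      INVALID,     ESTABLISHED },
        /* ESTABLISHED  */ { INVALID,  INVALID, INVALID,      INVALID,     INVALID },
    };

    typedef void (*transition_cb_t)(void *connection, state_t from, state_t to);

    typedef struct {
        state_t         state;
        void           *connection;     /* pointer back to the owning connection */
        transition_cb_t on_transition;  /* callback set at init time             */
    } state_machine_t;

    /* Feed a transition to the machine; fire the callback if the move is legal. */
    static int sm_transition(state_machine_t *sm, transition_t t)
    {
        int next = transition_matrix[sm->state][t];
        if (next == INVALID)
            return -1;                  /* illegal transition for this state */
        state_t from = sm->state;
        sm->state = (state_t)next;
        if (sm->on_transition)
            sm->on_transition(sm->connection, from, sm->state);
        return 0;
    }

In this sketch each connection would create one machine at init time and register a callback that does the protocol work for the new state (e.g. sending a SYN/ACK when the machine lands in SYN_RECEIVED).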


== Threading Rationale == 

Several crucial blocking queues justify our threading rationale. Our basic use of the blocking queue is a producer/consumer relationship: TCP connections produce packets that the IP node should consume and send, and the reverse holds for incoming packets. A bqueue_t is used for each direction of this relationship. Concretely, one thread runs the IP node, waiting for incoming packets on its interfaces and pushing them onto a queue for the tcp_node to consume. The TCP node sits in its own thread, pulling off the packets that ip_node pushes to it; these packets are then multiplexed to find which connection they are destined for. The connections themselves also run in their own individual threads, each handling the reading/writing for all of its functionality. The writing queue that a connection pushes to is the same queue that the IP node pulls from, which completes the loop. 
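
A sketch of that producer/consumer hand-off (the real bqueue_t interface may differ -- this is just the shape of the idea: a pthread mutex and condition variable protecting a linked-list queue):

    #include <pthread.h>
    #include <stdlib.h>

    /* Illustrative blocking queue; the repo's bqueue_t likely differs in detail. */
    typedef struct bq_node { void *data; struct bq_node *next; } bq_node_t;

    typedef struct {
        bq_node_t      *head, *tail;
        pthread_mutex_t lock;
        pthread_cond_t  nonempty;
    } bqueue_t;

    static void bqueue_init(bqueue_t *q)
    {
        q->head = q->tail = NULL;
        pthread_mutex_init(&q->lock, NULL);
        pthread_cond_init(&q->nonempty, NULL);
    }

    /* Producer side: e.g. the ip_node thread pushes unwrapped packets here. */
    static void bqueue_enqueue(bqueue_t *q, void *data)
    {
        bq_node_t *n = malloc(sizeof(*n));
        n->data = data;
        n->next = NULL;
        pthread_mutex_lock(&q->lock);
        if (q->tail) q->tail->next = n; else q->head = n;
        q->tail = n;
        pthread_cond_signal(&q->nonempty);
        pthread_mutex_unlock(&q->lock);
    }

    /* Consumer side: e.g. the tcp_node thread blocks here until a packet arrives. */
    static void *bqueue_dequeue(bqueue_t *q)
    {
        pthread_mutex_lock(&q->lock);
        while (q->head == NULL)
            pthread_cond_wait(&q->nonempty, &q->lock);
        bq_node_t *n = q->head;
        q->head = n->next;
        if (q->head == NULL) q->tail = NULL;
        pthread_mutex_unlock(&q->lock);
        void *data = n->data;
        free(n);
        return data;
    }

The same structure is used in both directions: one queue carries incoming packets from ip_node to tcp_node, and another carries outgoing packets from the connections back to ip_node.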


== Windows == 

The send window and receive window were built separately and are maintained in their own files. They are abstractions over the sliding window protocol: they provide the functionality without fully mechanizing the process. The benefit is that they are more modular and easier to test. However, many of the difficulties we had while debugging were related to the operation of the windows, precisely because of their abstractness. In retrospect, it would have been much more useful had the windows been implemented with a more direct purpose in mind (i.e. less modular). 
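
One concrete piece of bookkeeping those window abstractions hide is that TCP sequence numbers are 32 bits and wrap around, so every comparison must be done modulo 2^32. A small self-contained sketch (helper names are invented, not the repo's):

    #include <stdint.h>
    #include <stdio.h>

    /* Sequence numbers are 32-bit and wrap, so "a comes before b" must be
       computed mod 2^32 rather than with a plain '<'. */
    static int seq_lt(uint32_t a, uint32_t b)
    {
        return (int32_t)(a - b) < 0;
    }

    /* Does byte 'seq' fall inside the receive window [rcv_nxt, rcv_nxt + rcv_wnd)? */
    static int in_receive_window(uint32_t seq, uint32_t rcv_nxt, uint32_t rcv_wnd)
    {
        return !seq_lt(seq, rcv_nxt) && seq_lt(seq, rcv_nxt + rcv_wnd);
    }

    int main(void)
    {
        /* Wraparound example: the window starts just below 2^32 and wraps past 0. */
        uint32_t rcv_nxt = 0xFFFFFFF0u, rcv_wnd = 64;
        printf("%d\n", in_receive_window(0x00000005u, rcv_nxt, rcv_wnd));  /* 1: in window */
        printf("%d\n", in_receive_window(0x00000100u, rcv_nxt, rcv_wnd));  /* 0: beyond it */
        return 0;
    }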


KNOWN BUGS:

One of the major known bugs occurs when sending large files using recvfile/sendfile. Although we are initially able to transfer data efficiently and at relatively high speeds, throughput asymptotically declines and leads to a perpetual exchange of unacked/resent chunks of data. 

Another bug is related to the initial connection of two agents over highly lossy links. Once connected, transmission works flawlessly, but establishing the initial connection is difficult due to a problem with IP. 


DESIGN-SKETCH:

[[ packet in ]]

1. src/ip/ip_node.c:802 :: _handle_selected 
	-> char* packet = malloc()
	-> call (2)

2. src/ip/link_interface.c:187 	:: link_interface_read_packet
	-> recvfrom(packet) reads into
		the packet malloc()ed in (1)
	
3. src/ip/ip_node.c:802 :: _handle_selected 
	-> unwrap packet to get the payload,
		then this packet is free()ed
	-> memcpy()ed into packet_unwrapped

4. src/ip/ip_node.c:241 :: ip_node_read 
	-> tcp_packet_data init()ed whose
		char* packet is a pointer to
		the packet_unwrapped from (3)
	-> onto ip_node's read_queue

5. src/tcp/tcp_node.c:585 :: tcp_node_start
	-> tcp_packet_data dequeued from read_queue
	-> call _handle_packet()

6. src/tcp/tcp_node.c:750 :: _handle_packet
	||
	|-> if connection not valid, free packet, return
	||
	|-> otherwise pass it to the connection and call
		tcp_connection_queue_to_read()

7. src/tcp/tcp_connection.c:227 :: tcp_connection_queue_to_read
	-> enqueue the packet onto the read queue

8. src/tcp/tcp_connection.c:580 :: _handle_read_send
	-> dequeue the packet
	-> call tcp_connection_handle_receive_packet

9. src/tcp/tcp_connection.c:255 :: tcp_connection_handle_receive_packet
	-> tcp_unwrap_data memcpy()s the data
		into a new buffer, which is then
		passed off to the receive window
		which will handle freeing that data
	-> the rest of the packet is destroyed
		at the bottom of this function

********************************** End of TCP ********************************************************************
///////////////////////////////////////////////////////////////////////////////////////////
********************************** IP ********************************************************************

HOW TO RUN:
The makefile has many targets to help with building, debugging, and running the program. 
To build everything, run make build, or make rebuild to clean first. 
To run the tests, run make test. 
To run a single node (A.lnx) under valgrind, run make valgrind. 
To run the network (loop.net), run make runNetwork. To change any of these default arguments, edit the makefile variables DEFAULT_ARGS and DEFAULT_NETWORK_ARGS.


ARCHITECTURE (design decisions):

-- Flow of control -- 

    Our node operates on a single thread.  Main initializes the node with the list of links, at which point the node creates a
    link_interface_t -- our link abstraction struct -- for each link, and stores an array of its link_interfaces.
    The node also stores two hashmaps: one mapping socket file descriptor to link_interface, and one mapping virtual ip address
    to link_interface.

    After initialization, main tells the node to start running.  The node queries information from its interfaces and initializes
    its routing and forwarding tables with the information of its local addresses.  The node then sends out a RIP request.
    The node then goes into a loop where on each pass it:
        updates its list of socket file descriptors for select()
        uses select() to listen for packets on each of its links (and stdin); the blocking call times out after 1 second
        when select() returns, first handles reading from stdin
        then queries its interfaces again to see if any have gone up or down -- in which case the routing/forwarding
            tables must be updated
        then uses its hashmap from socket file descriptor to link_interface to identify which link_interfaces need
            to be read from, and handles that reading
        finally checks whether 5 seconds have passed; if so, it sends out a RIP response on each of its link_interfaces
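
    A rough sketch of the shape of that loop (the hashmaps, RIP handling, and routing-table updates are elided and only
    hinted at in comments -- helper behavior and names here are invented for illustration):

        #include <stdio.h>
        #include <sys/select.h>
        #include <sys/time.h>
        #include <time.h>
        #include <unistd.h>

        #define RIP_RESPONSE_INTERVAL 5

        /* Skeleton of the single-threaded loop described above.  The real ip_node
           maps fd -> link_interface with a hashmap; here we just loop over raw fds. */
        void run_loop(int *interface_fds, int num_interfaces)
        {
            time_t last_rip = time(NULL);

            for (;;) {
                /* 1. Rebuild the fd set each pass (interfaces may have come up or gone down). */
                fd_set readfds;
                FD_ZERO(&readfds);
                FD_SET(STDIN_FILENO, &readfds);
                int maxfd = STDIN_FILENO;
                for (int i = 0; i < num_interfaces; i++) {
                    FD_SET(interface_fds[i], &readfds);
                    if (interface_fds[i] > maxfd)
                        maxfd = interface_fds[i];
                }

                /* 2. Block for at most 1 second waiting for packets or stdin. */
                struct timeval timeout = { .tv_sec = 1, .tv_usec = 0 };
                if (select(maxfd + 1, &readfds, NULL, NULL, &timeout) < 0)
                    break;

                /* 3. stdin first (user commands), then any readable interfaces. */
                if (FD_ISSET(STDIN_FILENO, &readfds)) {
                    char line[256];
                    if (fgets(line, sizeof(line), stdin) == NULL)
                        break;
                    /* ... handle the command ... */
                }
                for (int i = 0; i < num_interfaces; i++) {
                    if (FD_ISSET(interface_fds[i], &readfds)) {
                        /* ... read the packet on this interface and forward/deliver it ... */
                    }
                }

                /* 4. Periodic RIP response every 5 seconds. */
                if (time(NULL) - last_rip >= RIP_RESPONSE_INTERVAL) {
                    /* ... send a RIP response on every interface ... */
                    last_rip = time(NULL);
                }
            }
        }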

--interfaces--

    Our abstraction for the link layer is a struct, link_interface_t, that handles all of the UDP-level communication.
    When a node comes online, it creates a link_interface_t for each of its links.  The link_interface_t stores and handles
    all the information necessary to reach, and receive messages from, the links on other nodes.

    A link_interface_t knows the following information and has functions that wrap around this information to handle it for the node:
        Its own id -- the id corresponds to the index where the link_interface_t is stored in ip_node's array of link_interface_t's
        The socket file descriptor on which it is listening
        A boolean recording whether it is 'up' or 'down'
        Its own local virtual ip
        Its remote virtual ip -- the virtual ip on the other side of the link
        The sockaddr information necessary for reading and sending on the link -- its send and receive functions wrap around these

    If a link_interface_t is instructed by the node to send or receive a packet while the link is down
    (perhaps because the node's routing/forwarding tables have not yet been updated after the link went down), the link_interface_t
    drops the packet.

    Once the node initializes an interface, it no longer knows about any of those fields itself -- it deletes the list of links
    and just stores an array (and the necessary hashmaps) of link_interface_t's that handle reading and sending for their given link.
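
    Put together, those fields suggest a struct of roughly this shape (a guess for illustration -- the actual definition
    lives in src/ip/link_interface.c and may differ):

        #include <stdbool.h>
        #include <stdint.h>
        #include <netinet/in.h>

        /* Approximate shape of the link-layer abstraction described above;
           field names are illustrative, not the real ones. */
        typedef struct link_interface {
            int                id;              /* index into ip_node's interface array    */
            int                sfd;             /* UDP socket fd this interface listens on */
            bool               up;              /* is the link currently up?               */
            uint32_t           local_virt_ip;   /* this side's virtual ip                  */
            uint32_t           remote_virt_ip;  /* virtual ip on the other end of the link */
            struct sockaddr_in remote_addr;     /* real UDP address used to send           */
        } link_interface_t;

    The send and receive wrappers then check the up flag and drop the packet when the link is down, as described above.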

TESTING: 

We used several means of testing our project. The first was unit tests: we wrote a testing harness under test/test.c that uses several homemade testing macros, which let us test individual pieces of the project and pretty-print the results on failure. This was helpful for testing the project in incremental steps, although not for testing it comprehensively. For more comprehensive testing, we wrote python scripts (test/pyLink.py and test/pyUtils.py) that init a UDP socket and try to communicate with our program. These helped a lot in the earlier stages of debugging, but later on we had to manually test the entire network to confirm that the routing tables were updating correctly, that the shortest path was taken, and that changes in topology were propagated accurately throughout the network. 
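
The unit-test macros were along these lines (a hypothetical reconstruction, not the exact macros in test/test.c):

    #include <stdio.h>

    /* Hypothetical testing macro in the spirit of the ones in test/test.c:
       compare a value against an expectation and pretty-print on failure. */
    static int tests_run = 0, tests_failed = 0;

    #define ASSERT_EQ(actual, expected) do {                                   \
            tests_run++;                                                       \
            if ((actual) != (expected)) {                                      \
                tests_failed++;                                                \
                printf("FAIL %s:%d: %s != %s\n",                               \
                       __FILE__, __LINE__, #actual, #expected);                \
            }                                                                  \
        } while (0)

    int main(void)
    {
        ASSERT_EQ(1 + 1, 2);     /* passes silently */
        ASSERT_EQ(2 + 2, 5);     /* prints a FAIL line */
        printf("%d/%d tests passed\n", tests_run - tests_failed, tests_run);
        return tests_failed != 0;
    }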

BUGS:
There are no known bugs in our project


About: Brown cs168 partner repo