static int nn_stream_send (struct nn_pipebase *self, struct nn_msg *msg)
{
    struct nn_stream *stream;
    struct nn_iobuf iov [3];

    stream = nn_cont (self, struct nn_stream, pipebase);

    /* Move the message to the local storage. */
    nn_msg_term (&stream->outmsg);
    nn_msg_mv (&stream->outmsg, msg);

    /* Serialise the message header. */
    nn_putll (stream->outhdr, nn_chunkref_size (&stream->outmsg.hdr) +
        nn_chunkref_size (&stream->outmsg.body));

    /* Start async sending. */
    iov [0].iov_base = stream->outhdr;
    iov [0].iov_len = sizeof (stream->outhdr);
    iov [1].iov_base = nn_chunkref_data (&stream->outmsg.hdr);
    iov [1].iov_len = nn_chunkref_size (&stream->outmsg.hdr);
    iov [2].iov_base = nn_chunkref_data (&stream->outmsg.body);
    iov [2].iov_len = nn_chunkref_size (&stream->outmsg.body);
    nn_usock_send (stream->usock, iov, 3);

    return 0;
}
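/* nn_stream_send above serialises the total payload size with nn_putll into
   an 8-byte prefix, as do several of the senders below. A minimal sketch of
   what such a helper presumably does, assuming it stores a 64-bit value in
   network (big-endian) byte order; example_putll is illustrative, not the
   actual nanomsg wire helper. */
#include <stdint.h>

static void example_putll (uint8_t *buf, uint64_t val)
{
    int i;

    /* Most significant byte first, so the peer can parse the size
       regardless of host endianness. */
    for (i = 0; i != 8; ++i)
        buf [i] = (uint8_t) (val >> (56 - i * 8));
}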
static int nn_stcp_send (struct nn_pipebase *self, struct nn_msg *msg)
{
    struct nn_stcp *stcp;
    struct nn_iovec iov [3];

    stcp = nn_cont (self, struct nn_stcp, pipebase);

    nn_assert (stcp);
    nn_assert_state (stcp, NN_STCP_STATE_ACTIVE);
    nn_assert (stcp->outstate == NN_STCP_OUTSTATE_IDLE);

    /* Move the message to the local storage. */
    nn_msg_term (&stcp->outmsg);
    nn_msg_mv (&stcp->outmsg, msg);

    /* Serialise the message header. */
    nn_putll (stcp->outhdr, nn_chunkref_size (&stcp->outmsg.sphdr) +
        nn_chunkref_size (&stcp->outmsg.body));

    /* Start async sending. */
    iov [0].iov_base = stcp->outhdr;
    iov [0].iov_len = sizeof (stcp->outhdr);
    iov [1].iov_base = nn_chunkref_data (&stcp->outmsg.sphdr);
    iov [1].iov_len = nn_chunkref_size (&stcp->outmsg.sphdr);
    iov [2].iov_base = nn_chunkref_data (&stcp->outmsg.body);
    iov [2].iov_len = nn_chunkref_size (&stcp->outmsg.body);
    nn_usock_send (stcp->usock, iov, 3);
    stcp->outstate = NN_STCP_OUTSTATE_SENDING;

    return 0;
}
int nn_msgqueue_recv (struct nn_msgqueue *self, struct nn_msg *msg)
{
    struct nn_msgqueue_chunk *o;

    /* If there is no message in the queue. */
    if (nn_slow (!self->count))
        return -EAGAIN;

    /* Move the message from the pipe to the user. */
    nn_msg_mv (msg, &self->in.chunk->msgs [self->in.pos]);

    /* Move to the next position. */
    ++self->in.pos;
    if (nn_slow (self->in.pos == NN_MSGQUEUE_GRANULARITY)) {
        o = self->in.chunk;
        self->in.chunk = self->in.chunk->next;
        self->in.pos = 0;
        if (nn_fast (!self->cache))
            self->cache = o;
        else
            nn_free (o);
    }

    /* Adjust the statistics. */
    --self->count;
    self->mem -= (nn_chunkref_size (&msg->hdr) +
        nn_chunkref_size (&msg->body));

    return 0;
}
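/* nn_msgqueue_recv above (and nn_msgqueue_send later in this listing) treat
   the queue as a singly linked list of fixed-size chunks, each holding
   NN_MSGQUEUE_GRANULARITY messages, with one spare chunk kept in self->cache
   to avoid an allocation on every chunk turnover. A minimal sketch of the
   layout those accesses imply; the field names mirror the code above, but the
   actual definition in msgqueue.h may differ. */
struct example_msgqueue_chunk {
    struct nn_msg msgs [NN_MSGQUEUE_GRANULARITY];
    struct example_msgqueue_chunk *next;
};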
static int nn_sipc_send (struct nn_pipebase *self, struct nn_msg *msg)
{
    struct nn_sipc *sipc;
    struct nn_iovec iov [3];

    sipc = nn_cont (self, struct nn_sipc, pipebase);

    nn_assert (sipc->state == NN_SIPC_STATE_ACTIVE);
    nn_assert (sipc->outstate == NN_SIPC_OUTSTATE_IDLE);

    /* Move the message to the local storage. */
    nn_msg_term (&sipc->outmsg);
    nn_msg_mv (&sipc->outmsg, msg);

    /* Serialise the message header. */
    sipc->outhdr [0] = NN_SIPC_MSG_NORMAL;
    nn_putll (sipc->outhdr + 1, nn_chunkref_size (&sipc->outmsg.hdr) +
        nn_chunkref_size (&sipc->outmsg.body));

    /* Start async sending. */
    iov [0].iov_base = sipc->outhdr;
    iov [0].iov_len = sizeof (sipc->outhdr);
    iov [1].iov_base = nn_chunkref_data (&sipc->outmsg.hdr);
    iov [1].iov_len = nn_chunkref_size (&sipc->outmsg.hdr);
    iov [2].iov_base = nn_chunkref_data (&sipc->outmsg.body);
    iov [2].iov_len = nn_chunkref_size (&sipc->outmsg.body);
    nn_usock_send (sipc->usock, iov, 3);
    sipc->outstate = NN_SIPC_OUTSTATE_SENDING;

    return 0;
}
static int nn_stcp_recv (struct nn_pipebase *self, struct nn_msg *msg)
{
    struct nn_stcp *stcp;

    stcp = nn_cont (self, struct nn_stcp, pipebase);

    nn_assert_state (stcp, NN_STCP_STATE_ACTIVE);
    nn_assert (stcp->instate == NN_STCP_INSTATE_HASMSG);

    /* Move received message to the user. */
    nn_msg_mv (msg, &stcp->inmsg);
    nn_msg_init (&stcp->inmsg, 0);

    /* Start receiving new message. */
    stcp->instate = NN_STCP_INSTATE_HDR;
    nn_usock_recv (stcp->usock, stcp->inhdr, sizeof (stcp->inhdr), NULL);

    return 0;
}
static int nn_stream_recv (struct nn_pipebase *self, struct nn_msg *msg)
{
    struct nn_stream *stream;

    stream = nn_cont (self, struct nn_stream, pipebase);

    /* Move message content to the user-supplied structure. */
    nn_msg_mv (msg, &stream->inmsg);
    nn_msg_init (&stream->inmsg, 0);

    /* Start receiving new message. */
    stream->instate = NN_STREAM_INSTATE_HDR;
    nn_usock_recv (stream->usock, stream->inhdr, 8);

    return 0;
}
static int nn_slibfabric_recv (struct nn_pipebase *self, struct nn_msg *msg)
{
    struct nn_slibfabric *slibfabric;

    slibfabric = nn_cont (self, struct nn_slibfabric, pipebase);

    nn_assert_state (slibfabric, NN_SLIBFABRIC_STATE_ACTIVE);
    nn_assert (slibfabric->instate == NN_SLIBFABRIC_INSTATE_HASMSG);

    /* Move received message to the user. */
    nn_msg_mv (msg, &slibfabric->inmsg);
    nn_msg_init (&slibfabric->inmsg, 0);

    /* Start receiving new message. */
    slibfabric->instate = NN_SLIBFABRIC_INSTATE_HDR;
    nn_usock_recv (slibfabric->usock, slibfabric->inhdr,
        sizeof (slibfabric->inhdr), NULL);

    return 0;
}
static int nn_sipc_recv (struct nn_pipebase *self, struct nn_msg *msg)
{
    struct nn_sipc *sipc;

    sipc = nn_cont (self, struct nn_sipc, pipebase);

    nn_assert_state (sipc, NN_SIPC_STATE_ACTIVE);
    nn_assert (sipc->instate == NN_SIPC_INSTATE_HASMSG);

    /* Move received message to the user. */
    nn_msg_mv (msg, &sipc->inmsg);
    nn_msg_init (&sipc->inmsg, 0);

    /* Start receiving new message. */
    sipc->instate = NN_SIPC_INSTATE_HDR;
    nn_usock_recv (sipc->usock, sipc->inhdr, sizeof (sipc->inhdr));

    return 0;
}
static int nn_req_recv (struct nn_sockbase *self, struct nn_msg *msg)
{
    struct nn_req *req;

    req = nn_cont (self, struct nn_req, xreq.sockbase);

    /* No request was sent. Waiting for a reply doesn't make sense. */
    if (nn_slow (req->state == NN_REQ_STATE_IDLE))
        return -EFSM;

    /* If the reply was not yet received, wait further. */
    if (nn_slow (req->state != NN_REQ_STATE_RECEIVED))
        return -EAGAIN;

    /* If the reply was already received, just pass it to the caller. */
    nn_msg_mv (msg, &req->reply);
    req->state = NN_REQ_STATE_IDLE;

    return 0;
}
static int nn_surveyor_send (struct nn_sockbase *self, struct nn_msg *msg)
{
    struct nn_surveyor *surveyor;

    surveyor = nn_cont (self, struct nn_surveyor, xsurveyor.sockbase);

    /* Generate new survey ID. */
    ++surveyor->surveyid;
    surveyor->surveyid |= 0x80000000;

    /* Tag the survey body with the survey ID. */
    nn_assert (nn_chunkref_size (&msg->sphdr) == 0);
    nn_chunkref_term (&msg->sphdr);
    nn_chunkref_init (&msg->sphdr, 4);
    nn_putl (nn_chunkref_data (&msg->sphdr), surveyor->surveyid);

    /* Store the survey, so that it can be sent later on. */
    nn_msg_term (&surveyor->tosend);
    nn_msg_mv (&surveyor->tosend, msg);
    nn_msg_init (msg, 0);

    /* If a survey is already in progress, cancel it. */
    if (nn_slow (nn_surveyor_inprogress (surveyor))) {

        /* First check whether the survey can be sent at all. */
        if (!(nn_xsurveyor_events (&surveyor->xsurveyor.sockbase) &
              NN_SOCKBASE_EVENT_OUT))
            return -EAGAIN;

        /* Cancel the current survey. */
        nn_fsm_action (&surveyor->fsm, NN_SURVEYOR_ACTION_CANCEL);

        return 0;
    }

    /* Notify the state machine that the survey was started. */
    nn_fsm_action (&surveyor->fsm, NN_SURVEYOR_ACTION_START);

    return 0;
}
static int nn_req_recv (struct nn_sockbase *self, struct nn_msg *msg)
{
    struct nn_req *req;

    req = nn_cont (self, struct nn_req, xreq.sockbase);

    /* No request was sent. Waiting for a reply doesn't make sense. */
    if (nn_slow (!nn_req_inprogress (req)))
        return -EFSM;

    /* If the reply was not yet received, wait further. */
    if (nn_slow (req->state != NN_REQ_STATE_DONE))
        return -EAGAIN;

    /* If the reply was already received, just pass it to the caller. */
    nn_msg_mv (msg, &req->reply);
    nn_msg_init (&req->reply, 0);

    /* Notify the state machine. */
    nn_fsm_action (&req->fsm, NN_REQ_ACTION_RECEIVED);

    return 0;
}
static int nn_req_send (struct nn_sockbase *self, struct nn_msg *msg)
{
    struct nn_req *req;

    req = nn_cont (self, struct nn_req, xreq.sockbase);

    /* Generate new request ID for the new request and put it into message
       header. The most significant bit is set to 1 to indicate that this is
       the bottom of the backtrace stack. */
    ++req->reqid;
    nn_assert (nn_chunkref_size (&msg->hdr) == 0);
    nn_chunkref_term (&msg->hdr);
    nn_chunkref_init (&msg->hdr, 4);
    nn_putl (nn_chunkref_data (&msg->hdr), req->reqid | 0x80000000);

    /* Store the message so that it can be re-sent if there's no reply. */
    nn_msg_term (&req->request);
    nn_msg_mv (&req->request, msg);

    /* Notify the state machine. */
    nn_fsm_action (&req->fsm, NN_REQ_ACTION_SENT);

    return 0;
}
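/* On the reply path the peer echoes this 4-byte tag back and the REQ socket
   matches it against req->reqid before handing the reply to the user. A
   minimal sketch of that check, assuming the tag is stored big-endian as
   nn_putl writes it above; example_getl and example_reply_matches are
   illustrative, not the actual nn_req reply handling code. */
#include <stdint.h>

static uint32_t example_getl (const uint8_t *buf)
{
    return ((uint32_t) buf [0] << 24) | ((uint32_t) buf [1] << 16) |
           ((uint32_t) buf [2] << 8) | (uint32_t) buf [3];
}

static int example_reply_matches (const uint8_t *hdr, uint32_t reqid)
{
    /* Stale or foreign replies carry a different ID and would be dropped. */
    return example_getl (hdr) == (reqid | 0x80000000);
}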
int nn_msgqueue_send (struct nn_msgqueue *self, struct nn_msg *msg)
{
    size_t msgsz;

    /* By allowing one message of arbitrary size to be written to the queue,
       we allow even messages that exceed max buffer size to pass through.
       Beyond that we'll apply the buffer limit as specified by the user. */
    msgsz = nn_chunkref_size (&msg->hdr) + nn_chunkref_size (&msg->body);
    if (nn_slow (self->count > 0 && self->mem + msgsz >= self->maxmem))
        return -EAGAIN;

    /* Adjust the statistics. */
    ++self->count;
    self->mem += msgsz;

    /* Move the content of the message to the pipe. */
    nn_msg_mv (&self->out.chunk->msgs [self->out.pos], msg);
    ++self->out.pos;

    /* If there's no space for a new message in the pipe, either re-use the
       cache chunk or allocate a new chunk if it does not exist. */
    if (nn_slow (self->out.pos == NN_MSGQUEUE_GRANULARITY)) {
        if (nn_slow (!self->cache)) {
            self->cache = nn_alloc (sizeof (struct nn_msgqueue_chunk),
                "msgqueue chunk");
            alloc_assert (self->cache);
            self->cache->next = NULL;
        }
        self->out.chunk->next = self->cache;
        self->out.chunk = self->cache;
        self->cache = NULL;
        self->out.pos = 0;
    }

    return 0;
}
/* Start sending a message. */
static int nn_sws_send (struct nn_pipebase *self, struct nn_msg *msg)
{
    struct nn_sws *sws;
    struct nn_iovec iov [3];
    int mask_pos;
    size_t sz;
    size_t hdrsz;
    uint8_t mask [4];

    sws = nn_cont (self, struct nn_sws, pipebase);

    nn_assert_state (sws, NN_SWS_STATE_ACTIVE);
    nn_assert (sws->outstate == NN_SWS_OUTSTATE_IDLE);

    /* Move the message to the local storage. */
    nn_msg_term (&sws->outmsg);
    nn_msg_mv (&sws->outmsg, msg);

    /* Compose the message header. See RFC 6455, section 5.2. */

    /* Messages are always sent in a single fragment. They may be split up
       on the way to the peer though. */
    sws->outhdr [0] = NN_WS_OPCODE_BINARY | NN_SWS_FRAME_BITMASK_FIN;
    hdrsz = 1;

    /* Frame the payload size. Don't set the mask bit yet. */
    sz = nn_chunkref_size (&sws->outmsg.sphdr) +
        nn_chunkref_size (&sws->outmsg.body);
    if (sz <= 0x7d) {
        sws->outhdr [1] = (uint8_t) sz;
        hdrsz += 1;
    }
    else if (sz <= 0xffff) {
        sws->outhdr [1] = 0x7e;
        nn_puts (&sws->outhdr [2], (uint16_t) sz);
        hdrsz += 3;
    }
    else {
        sws->outhdr [1] = 0x7f;
        nn_putll (&sws->outhdr [2], (uint64_t) sz);
        hdrsz += 9;
    }

    /* Client-to-server communication has to be masked. See RFC 6455,
       section 5.3. */
    if (sws->mode == NN_WS_CLIENT) {

        /* Generate 32-bit mask and store it in the frame. */
        /* TODO: This is not a strong source of entropy. However, can we
           afford a stronger one without exhausting all the available
           entropy in the system at high message rates? */
        nn_random_generate (mask, 4);
        sws->outhdr [1] |= NN_SWS_FRAME_BITMASK_MASKED;
        memcpy (&sws->outhdr [hdrsz], mask, 4);
        hdrsz += 4;

        /* Mask payload, beginning with header and moving to body. */
        /* TODO: This won't work if the message is shared among multiple
           transports. We probably want to send the message in multiple
           operations, masking only as much data at a time. */
        mask_pos = 0;
        nn_sws_mask_payload (nn_chunkref_data (&sws->outmsg.sphdr),
            nn_chunkref_size (&sws->outmsg.sphdr), mask, &mask_pos);
        nn_sws_mask_payload (nn_chunkref_data (&sws->outmsg.body),
            nn_chunkref_size (&sws->outmsg.body), mask, &mask_pos);
    }

    /* Start async sending. */
    iov [0].iov_base = sws->outhdr;
    iov [0].iov_len = hdrsz;
    iov [1].iov_base = nn_chunkref_data (&sws->outmsg.sphdr);
    iov [1].iov_len = nn_chunkref_size (&sws->outmsg.sphdr);
    iov [2].iov_base = nn_chunkref_data (&sws->outmsg.body);
    iov [2].iov_len = nn_chunkref_size (&sws->outmsg.body);
    nn_usock_send (sws->usock, iov, 3);
    sws->outstate = NN_SWS_OUTSTATE_SENDING;

    /* If a Close handshake was just sent, it's time to shut down. */
    if ((sws->outhdr [0] & NN_SWS_FRAME_BITMASK_OPCODE) ==
          NN_WS_OPCODE_CLOSE) {
        nn_pipebase_stop (&sws->pipebase);
        sws->state = NN_SWS_STATE_CLOSING_CONNECTION;
    }

    return 0;
}
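/* nn_sws_mask_payload above applies the client-side masking required by
   RFC 6455, section 5.3. A minimal sketch of such a helper, assuming it XORs
   every payload byte with the 4-byte mask and cycles the mask position across
   successive calls (which is why mask_pos is threaded through both calls
   above); example_mask_payload is illustrative and may differ from the
   actual implementation in sws.c. */
#include <stddef.h>
#include <stdint.h>

static void example_mask_payload (uint8_t *payload, size_t len,
    const uint8_t *mask, int *mask_pos)
{
    size_t i;

    for (i = 0; i != len; ++i) {
        payload [i] ^= mask [*mask_pos];
        *mask_pos = (*mask_pos + 1) % 4;
    }
}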