From 49829982ba955fea91ad56cc05fd526eab106f14 Mon Sep 17 00:00:00 2001
From: Zhijiang Hu
Date: Mon, 31 Jul 2017 19:10:57 +0800
Subject: Jira: DAISY-36 Update doc structure

This PS updates the doc structure according to [1].

Note: This PS also adds content to the docs describing how roles are
mapped to discovered nodes.

[1] http://docs.opnfv.org/en/stable-danube/how-to-use-docs/documentation-guide.html?highlight=templates#document-structure-and-contribution

Change-Id: I7b2ef916753cddd8cd845abae8c7d5865c49e1ac
Signed-off-by: Zhijiang Hu
---
 docs/configurationguide/index.rst | 16 --
 docs/configurationguide/installerconfig.rst | 15 --
 docs/developer/design/index.rst | 16 --
 docs/developer/design/multicast.rst | 278 ----------------------
 docs/developer/spec/multicast.rst | 190 ---------------
 docs/development/design/index.rst | 16 ++
 docs/development/design/multicast.rst | 278 ++++++++++++++++++++++
 docs/development/requirements/multicast.rst | 190 +++++++++++++++
 docs/installationprocedure/bmdeploy.rst | 144 -----------
 docs/installationprocedure/index.rst | 18 --
 docs/installationprocedure/installation_guide.rst | 168 -------------
 docs/installationprocedure/introduction.rst | 23 --
 docs/installationprocedure/vmdeploy.rst | 144 -----------
 docs/release/configguide/index.rst | 16 ++
 docs/release/configguide/installerconfig.rst | 15 ++
 docs/release/installation/bmdeploy.rst | 150 ++++++++++++
 docs/release/installation/index.rst | 18 ++
 docs/release/installation/installation_guide.rst | 168 +++++++++++++
 docs/release/installation/introduction.rst | 23 ++
 docs/release/installation/vmdeploy.rst | 150 ++++++++++++
 docs/release/release-notes/index.rst | 18 ++
 docs/release/release-notes/release-notes.rst | 140 +++++++++++
 docs/releasenotes/index.rst | 18 --
 docs/releasenotes/release-notes.rst | 140 -----------
 24 files changed, 1182 insertions(+), 1170 deletions(-)
 delete mode 100644 docs/configurationguide/index.rst
 delete mode 100644 docs/configurationguide/installerconfig.rst
 delete mode 100644 docs/developer/design/index.rst
 delete mode 100644 docs/developer/design/multicast.rst
 delete mode 100644 docs/developer/spec/multicast.rst
 create mode 100644 docs/development/design/index.rst
 create mode 100644 docs/development/design/multicast.rst
 create mode 100644 docs/development/requirements/multicast.rst
 delete mode 100644 docs/installationprocedure/bmdeploy.rst
 delete mode 100644 docs/installationprocedure/index.rst
 delete mode 100644 docs/installationprocedure/installation_guide.rst
 delete mode 100644 docs/installationprocedure/introduction.rst
 delete mode 100644 docs/installationprocedure/vmdeploy.rst
 create mode 100644 docs/release/configguide/index.rst
 create mode 100644 docs/release/configguide/installerconfig.rst
 create mode 100644 docs/release/installation/bmdeploy.rst
 create mode 100644 docs/release/installation/index.rst
 create mode 100644 docs/release/installation/installation_guide.rst
 create mode 100644 docs/release/installation/introduction.rst
 create mode 100644 docs/release/installation/vmdeploy.rst
 create mode 100644 docs/release/release-notes/index.rst
 create mode 100644 docs/release/release-notes/release-notes.rst
 delete mode 100644 docs/releasenotes/index.rst
 delete mode 100644 docs/releasenotes/release-notes.rst

diff --git a/docs/configurationguide/index.rst b/docs/configurationguide/index.rst
deleted file mode 100644
index 7b531f45..00000000
--- a/docs/configurationguide/index.rst
+++ /dev/null
@@ -1,16 +0,0 @@
-..
This document is protected/licensed under the following conditions -.. (c) Sun Jing (ZTE corporation) -.. Licensed under a Creative Commons Attribution 4.0 International License. -.. You should have received a copy of the license along with this work. -.. If not, see . - -***************************** -Release notes for Daisy4nfv -***************************** - -.. toctree:: - :numbered: - :maxdepth: 2 - - installerconfig.rst - diff --git a/docs/configurationguide/installerconfig.rst b/docs/configurationguide/installerconfig.rst deleted file mode 100644 index 795f6028..00000000 --- a/docs/configurationguide/installerconfig.rst +++ /dev/null @@ -1,15 +0,0 @@ - -.. This document is protected/licensed under the following conditions -.. (c) Sun Jing (ZTE corporation) -.. Licensed under a Creative Commons Attribution 4.0 International License. -.. You should have received a copy of the license along with this work. -.. If not, see . - - -======== -Abstract -======== - -This document compiles the release notes for the D 2.0 release of -OPNFV when using Daisy4nfv as a deployment tool. - diff --git a/docs/developer/design/index.rst b/docs/developer/design/index.rst deleted file mode 100644 index bc5e9f40..00000000 --- a/docs/developer/design/index.rst +++ /dev/null @@ -1,16 +0,0 @@ -.. This document is protected/licensed under the following conditions -.. (c) Sun Jing (ZTE corporation) -.. Licensed under a Creative Commons Attribution 4.0 International License. -.. You should have received a copy of the license along with this work. -.. If not, see . - -************************* -Design Docs for Daisy4nfv -************************* - -.. toctree:: - :numbered: - :maxdepth: 2 - - multicast.rst - diff --git a/docs/developer/design/multicast.rst b/docs/developer/design/multicast.rst deleted file mode 100644 index 89422fe6..00000000 --- a/docs/developer/design/multicast.rst +++ /dev/null @@ -1,278 +0,0 @@ -Detailed Design -=============== - -Protocol Design ---------------- - -1. All Protocol headers are 1 byte long or align to 4 bytes. -2. Packet size should not exceed above 1500(MTU) bytes including UDP/IP header and should -be align to 4 bytes. In future, MTU can be modified larger than 1500(Jumbo Frame) through -cmd line option to enlarge the data throughput. - -/* Packet header definition (align to 4 bytes) */ -struct packet_ctl { - uint32_t seq; // packet seq number start from 0, unique in server life cycle. - uint32_t crc; // checksum - uint32_t data_size; // payload length - uint8_t data[0]; -}; - -/* Buffer info definition (align to 4 bytes) */ -struct buffer_ctl { - uint32_t buffer_id; // buffer seq number start from 0, unique in server life cycle. - uint32_t buffer_size; // payload total length of a buffer - uint32_t packet_id_base; // seq number of the first packet in this buffer. - uint32_t pkt_count; // number of packet in this buffer, 0 means EOF. -}; - - -3. 1-byte-long header definition - -Signals such as the four below are 1 byte long, to simplify the receive process(since it -cannot be spitted ). - -#define CLIENT_READY 0x1 -#define CLIENT_REQ 0x2 -#define CLIENT_DONE 0x4 -#define SERVER_SENT 0x8 - -Note: Please see the collaboration diagram for their meanings. - -4. Retransmission Request Header - -/* Retransmition Request Header (align to 4 bytes) */ -struct request_ctl { - uint32_t req_count; // How many seqs below. - uint32_t seqs[0]; // packet seqs. -}; - -5. Buffer operations - -void buffer_init(); // Init the buffer_ctl structure and all(say 1024) packet_ctl -structures. 
Allocate buffer memory. -long buffer_fill(int fd); // fill a buffer from fd, such as stdin -long buffer_flush(int fd); // flush a buffer to fd, say stdout -struct packet_ctl *packet_put(struct packet_ctl *new_pkt);// put a packet to a buffer -and return a free memory slot for the next packet. -struct packet_ctl *packet_get(uint32_t seq);// get a packet data in buffer by -indicating the packet seq. - - -How to sync between server threads ----------------------------------- - -If children's aaa() operation need to wait the parents's init() to be done, then do it -literally like this: - - UDP Server - TCP Server1 = spawn( )----> TCP Server1 - init() - TCP Server2 = spawn( )-----> TCP Server2 - V(sem)----------------------> P(sem) // No child any more - V(sem)---------------------> P(sem) - aaa() // No need to V(sem), for no child - aaa() - -If parent's send() operation need to wait the children's ready() done, then do it -literally too, but is a reverse way: - - UDP Server TCP Server1 TCP Server2 - // No child any more - ready() ready() - P(sem) <--------------------- V(sem) - P(sem) <------------------ V(sem) - send() - -Note that the aaa() and ready() operations above run in parallel. If this is not the -case due to race condition, the sequence above can be modified into this below: - - UDP Server TCP Server1 TCP Server2 - // No child any more - ready() - P(sem) <--------------------- V(sem) - ready() - P(sem) <------------------- V(sem) - send() - - -In order to implement such chained/zipper sync pattern, a pair of semaphores is -needed between the parent and the child. One is used by child to wait parent , the -other is used by parent to wait child. semaphore pair can be allocated by parent -and pass the pointer to the child over spawn() operation such as pthread_create(). - -/* semaphore pair definition */ -struct semaphores { - sem_t wait_parent; - sem_t wait_child; -}; - -Then the semaphore pair can be recorded by threads by using the semlink struct below: -struct semlink { - struct semaphores *this; /* used by parent to point to the struct semaphores - which it created during spawn child. */ - struct semaphores *parent; /* used by child to point to the struct - semaphores which it created by parent */ -}; - -chained/zipper sync API: - -void sl_wait_child(struct semlink *sl); -void sl_release_child(struct semlink *sl); -void sl_wait_parent(struct semlink *sl); -void sl_release_parent(struct semlink *sl); - -API usage is like this. - -Thread1(root parent) Thread2(child) Thread3(grandchild) -sl_wait_parent(noop op) -sl_release_child - +---------->sl_wait_parent - sl_release_child - +-----------> sl_wait_parent - sl_release_child(noop op) - ... 
- sl_wait_child(noop op) - + sl_release_parent - sl_wait_child <------------- - + sl_release_parent -sl_wait_child <------------ -sl_release_parent(noop op) - -API implementation: - -void sl_wait_child(struct semlink *sl) -{ - if (sl->this) { - P(sl->this->wait_child); - } -} - -void sl_release_child(struct semlink *sl) -{ - if (sl->this) { - V(sl->this->wait_parent); - } -} - -void sl_wait_parent(struct semlink *sl) -{ - if (sl->parent) { - P(sl->parent->wait_parent); - } -} - -void sl_release_parent(struct semlink *sl) -{ - if (sl->parent) { - V(sl->parent->wait_child); - } -} - -Client flow chart ------------------ -See Collaboration Diagram - -UDP thread flow chart ---------------------- -See Collaboration Diagram - -TCP thread flow chart ---------------------- - - -S_INIT --- (UDP initialized) ---> S_ACCEPT --- (accept clients) --+ - | - /----------------------------------------------------------------/ - V -S_PREP --- (UDP prepared abuffer) - ^ | - | \--> S_SYNC --- (clients ClIENT_READY) - | | - | \--> S_SEND --- (clients CLIENT_DONE) - | | - | V - \---------------(bufferctl.pkt_count != 0)-----------------------+ - | - V - exit() <--- (bufferctl.pkt_count == 0) - - -TCP using poll and message queue --------------------------------- - -TCP uses poll() to sync with client's events as well as output event from itself, so -that we can use non-block socket operations to reduce the latency. POLLIN means there -are message from client and POLLOUT means we are ready to send message/retransmission -packets to client. - -poll main loop pseudo code: -void check_clients(struct server_status_data *sdata) -{ - poll_events = poll(&(sdata->ds[1]), sdata->ccount - 1, timeout); - - /* check all connected clients */ - for (sdata->cindex = 1; sdata->cindex < sdata->ccount; sdata->cindex++) { - ds = &(sdata->ds[sdata->cindex]); - if (!ds->revents) { - continue; - } - - if (ds->revents & (POLLERR|POLLHUP|POLLNVAL)) { - handle_error_event(sdata); - } else if (ds->revents & (POLLIN|POLLPRI)) { - handle_pullin_event(sdata); // may set POLLOUT into ds->events - // to trigger handle_pullout_event(). - } else if (ds->revents & POLLOUT) { - handle_pullout_event(sdata); - } - } -} - -For TCP, since the message from client may not complete and send data may be also -interrupted due to non-block fashion, there should be one send message queue and a -receive message queue on the server side for each client (client do not use non-block -operations). - -TCP message queue definition: - -struct tcpq { - struct qmsg *head, *tail; - long count; /* message count in a queue */ - long size; /* Total data size of a queue */ -}; - -TCP message queue item definition: - -struct qmsg { - struct qmsg *next; - void *data; - long size; -}; - -TCP message queue API: - -// Allocate and init a queue. -struct tcpq * tcpq_queue_init(void); - -// Free a queue. -void tcpq_queue_free(struct tcpq *q); - -// Return queue length. -long tcpq_queue_dsize(struct tcpq *q); - -// queue new message to tail. -void tcpq_queue_tail(struct tcpq *q, void *data, long size); - -// queue message that cannot be sent currently back to queue head. -void tcpq_queue_head(struct tcpq *q, void *data, long size); - -// get one piece from queue head. -void * tcpq_dequeue_head(struct tcpq *q, long *size); - -// Serialize all pieces of a queue, and move it out of queue, to ease the further -//operation on it. 
-void * tcpq_dqueue_flat(struct tcpq *q, long *size); - -// Serialize all pieces of a queue, do not move it out of queue, to ease the further -//operation on it. -void * tcpq_queue_flat_peek(struct tcpq *q, long *size); diff --git a/docs/developer/spec/multicast.rst b/docs/developer/spec/multicast.rst deleted file mode 100644 index ba314d3a..00000000 --- a/docs/developer/spec/multicast.rst +++ /dev/null @@ -1,190 +0,0 @@ -Requirement -=========== -1. When deploying a large OPNFV/OpenStack cluster, we would like to take the advantage of UDP -multicast to prevent the network bottleneck when distributing Kolla container from one -Installer Server to all target hosts by using unicast. - -2. When it comes to auto scaling (extension) of compute nodes, use unicast is acceptable, since -the number of nodes in this condition is usually small. - -The basic step to introduce multicast to deployment is: -a. Still setup the monopolistic docker registry server on Daisy server as a failsafe. -b. Daisy server, as the multicast server, prepares the image file to be transmitted, and count -how many target hosts(as the multicast clients)that should receive the image file -simultaneously. -c. Multicast clients tell the multicast server about ready to receive the image. -d. Multicast server transmits image over UDP multicast channel. -e. Multicast clients report success after received the whole image. -f. Setup docker registry server on each target hosts based upon received docker image. -g. Setup Kolla ansible to use 127.0.0.1 as the registry server IP so that the real docker -container retrieving network activities only take place inside target hosts. - - -Design -====== - -Methods to achieve ------------------- - -TIPC -++++ - -TIPC or its wrapper such as ZeroMQ is good at multicast, but it is not suitable as an -installer: -1. The default TIPC kernel module equipped by CentOS7(kernel verison 3.10) is NOT stable -especially in L3 multicast(although we can use L2 multicast, but the network will be limited to -L2). If errors happen, it is hard for us to recover a node from kernel panic. - -2. TIPC's design is based on a stable node cluster environment, esp in Lossless Ethernet. But -the real environment is generally not in that case. When multicast is broken, Installer should -switch to unicast, but TIPC currently do not have such capability. - -Top level design ----------------- -1. There are two kinds of thread on the server side, one is UDP multicast thread the other is -TCP sync/retransmit thread. There will be more than one TCP threads since one TCP thread can -only serve a limited client (say 64~128) in order to limit the CPU load and unicast retransmit -network usage. - -2. There is only one thread on client side. - -3. All the packets that a client lost during UDP multicast will be request by client to the TCP -thread and resend by using TCP unicast, if unicast still cannot deliver the packets successfully, -the client will failback to using the monopolistic docker registry server on Daisy server as a -failsafe option. - -4. Each packet needs checksum. - - -UDP Server Design (runs on Daisy Server) ----------------------------------------- - -1. Multicast group IP and Port should be configurable, as well as the interface that will be -used as the egress of the multicast packets. The user will pass the interface's IP as the -handle to find the egress. - -2. Image data to be sent is passed to server through stdin. - -3. 
Consider the size of image is large (xGB), the server cannot pre-allocate whole buffer to -hold all image at once. Besides, since the data is from stdin and the actual length is -unpredictable. So the server should split the data into small size buffers and send to the -clients one by one. Furthermore, buffer shall be divided into packets which size is MTU -including the UDP/IP header. Then the buffer size can be , for example 1024 * MTU including the -UDP/IP header. - -4. After sending one buffer to client the server should stop and get feedback from client to -see if all clients have got all packets in that buffer. If any clients lost any buffer, client -should request the server to resend packets from a more stable way(TCP). - -5. when got the EOF from stdin, server should send a buffer which size is 0 as an EOF signal to -the client to let it know about the end of sending. - - -TCP Server Design (runs on Daisy Server) ----------------------------------------- - -1. All TCP server threads and the only one UDP thread share one process. The UDP thread is the -parent thread, and the first TCP thread is the child, while the second TCP thread is the -grandchild, and so on. Thus, for each TCP thread, there is only one parent and at most one -child. - -2. TCP thread accepts the connect request from client. The number of client is predefined by -server cmdline parameter. Each TCP thread connect with at most ,say 64 clients, if there are -more clients to be connected to, then a child TCP thread is spawned by the parent. - -3. Before UDP thread sending any buffer to client, all TCP threads should send UDP multicast -IP/Port information to their clients beforehand. - -4. During each buffer sending cycle, TCP threads send a special protocol message to tell -clients about the size/id of the buffer and id of each packet in it. After getting -acknowledgements from all clients, TCP threads then signal the UDP thread to start -multicasting buffer over UDP. After multicasting finished, TCP threads notifies clients -multicast is done, and wait acknowledgements from clients again. If clients requests -retransmission, then it is the responsibility of TCP threads to resend packets over unicast. -If no retransmission needed, then clients should signal TCP threads that they are ready for -the next buffer to come. - -5. Repeat step 4 if buffer size is not 0 in the last round, otherwise, TCP server shutdown -connection and exit. - - -Server cmdline usage example ----------------------------- - -./server [port] < kolla_image.tgz - - is used here to specify the multicast egress interface. But which interface will be -used by TCP is leaved to route table to decide. - indicates the number of clients , thus the number of target hosts which -need to receive the image. -[port] is the port that will be used by both UDP and TCP. Default value can be used if user -does not provide it. - - -Client Design(Target Host side) --------------------------------- - -1. Each target hosts has only one client process. - -2. Client connect to TCP server according to the cmdline parameters right after start up. - -3. After connecting to TCP server, client first read from TCP server the multicast group -information which can be used to create the multicast receive socket then. - -4. During each buffer receiving cycle, the client first read from TCP server the buffer info, -prepare the receive buffer, and acknowledge the TCP server that it is ready to receive. 
Then,
-client receive buffer from the multicast socket until TCP server notifying the end of
-multicast. By compare the buffer info and the received packets, the client knows whether to
-send the retransmission request or not and whether to wait retransmission packet or not.
-After all packets are received from UDP/TCP, the client eventually flush buffer to stdout
-and tells the TCP server about ready to receive the next buffer.
-
-5. Repeat step 4 if buffer size is not 0 in the last round, otherwise, client shutdowns
-connection and exit.
-
-Client cmdline usage example
-----------------------------
-
-./client [port] > kolla_image.tgz
-
- is used here to specify the multicast ingress interface. But which interface
-will be used by TCP is leaved to route table to decide.
- indicates the TCP server IP to be connected to.
-[port] is the port that will be used by both connect to TCP server and receive multicast
-data.
-
-
-Collaboration diagram among UDP Server, TCP Server(illustrate only one TCP thread)
-and Clients:
-
-
-UDP Server                TCP Server                         Client
-    |                         |                                 |
-init mcast group
-init mcast send socket
-    ---------------------------------->
-                          accept clients
-                          <------------------------connet------------------
-                          --------------------send mcast group info------->
-    <----------------------------------
-state = PREP
-do {
-read data from stdin
-prepare one buffer
-    ----------------------------------->
-                          state = SYNC
-                          -------------------send buffer info-------------->
-                          <----------------------send ClIENT_READY-----------
-    <----------------------------------
-state = SEND
-
-================================================send buffer over UDP multicast======>
-    ----------------------------------->
-                          -----------------------send SERVER_SENT----------->
-                          [<-------------------send CLIENT_REQUEST----------]
-                          [--------------send buffer over TCP unicast------>]
-                                                         flush buffer to stdout
-                          <-------------------send CLIENT_DONE---------------
-    <----------------------------------
-state = PREP
-while (buffer.len != 0)
diff --git a/docs/development/design/index.rst b/docs/development/design/index.rst
new file mode 100644
index 00000000..bc5e9f40
--- /dev/null
+++ b/docs/development/design/index.rst
@@ -0,0 +1,16 @@
+.. This document is protected/licensed under the following conditions
+.. (c) Sun Jing (ZTE corporation)
+.. Licensed under a Creative Commons Attribution 4.0 International License.
+.. You should have received a copy of the license along with this work.
+.. If not, see <http://creativecommons.org/licenses/by/4.0/>.
+
+*************************
+Design Docs for Daisy4nfv
+*************************
+
+.. toctree::
+   :numbered:
+   :maxdepth: 2
+
+   multicast.rst
+
diff --git a/docs/development/design/multicast.rst b/docs/development/design/multicast.rst
new file mode 100644
index 00000000..89422fe6
--- /dev/null
+++ b/docs/development/design/multicast.rst
@@ -0,0 +1,278 @@
+Detailed Design
+===============
+
+Protocol Design
+---------------
+
+1. All protocol headers are 1 byte long or aligned to 4 bytes.
+2. Packet size, including the UDP/IP header, should not exceed 1500 (MTU) bytes, and should
+be aligned to 4 bytes. In the future, the MTU may be enlarged beyond 1500 (jumbo frames)
+through a command line option to increase data throughput.
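+
+As a worked example of this sizing rule (assuming a 20-byte IPv4 header and an 8-byte UDP
+header): a 1500-byte MTU leaves 1500 - 20 - 8 = 1472 bytes for the protocol, and after
+subtracting the 12-byte packet header defined below, up to 1460 bytes of payload fit into
+one packet, which is already a multiple of 4.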
+
+/* Packet header definition (align to 4 bytes) */
+struct packet_ctl {
+    uint32_t seq;       // packet seq number, starting from 0, unique in server life cycle.
+    uint32_t crc;       // checksum
+    uint32_t data_size; // payload length
+    uint8_t data[0];
+};
+
+/* Buffer info definition (align to 4 bytes) */
+struct buffer_ctl {
+    uint32_t buffer_id;      // buffer seq number, starting from 0, unique in server life cycle.
+    uint32_t buffer_size;    // total payload length of a buffer
+    uint32_t packet_id_base; // seq number of the first packet in this buffer.
+    uint32_t pkt_count;      // number of packets in this buffer, 0 means EOF.
+};
+
+
+3. 1-byte-long header definition
+
+Signals such as the four below are 1 byte long, to simplify the receive process (a 1-byte
+signal cannot be split across reads).
+
+#define CLIENT_READY 0x1
+#define CLIENT_REQ   0x2
+#define CLIENT_DONE  0x4
+#define SERVER_SENT  0x8
+
+Note: Please see the collaboration diagram for their meanings.
+
+4. Retransmission Request Header
+
+/* Retransmission Request Header (align to 4 bytes) */
+struct request_ctl {
+    uint32_t req_count; // How many seqs below.
+    uint32_t seqs[0];   // packet seqs.
+};
+
+5. Buffer operations
+
+void buffer_init(); // Init the buffer_ctl structure and all (say 1024) packet_ctl
+structures. Allocate buffer memory.
+long buffer_fill(int fd);  // fill a buffer from fd, such as stdin
+long buffer_flush(int fd); // flush a buffer to fd, say stdout
+struct packet_ctl *packet_put(struct packet_ctl *new_pkt); // put a packet into a buffer
+and return a free memory slot for the next packet.
+struct packet_ctl *packet_get(uint32_t seq); // get a packet's data in the buffer by
+indicating the packet seq.
+
+
+How to sync between server threads
+----------------------------------
+
+If a child's aaa() operation needs to wait for its parent's init() to be done, then do it
+literally like this:
+
+  UDP Server
+  TCP Server1 = spawn( )----> TCP Server1
+  init()
+                              TCP Server2 = spawn( )-----> TCP Server2
+  V(sem)--------------------> P(sem)   // No child any more
+                              V(sem)---------------------> P(sem)
+                              aaa()    // No need to V(sem), for no child
+                                                           aaa()
+
+If the parent's send() operation needs to wait for the children's ready() to be done, then
+do it literally too, but in the reverse way:
+
+  UDP Server                  TCP Server1                  TCP Server2
+                                                           // No child any more
+                              ready()                      ready()
+  P(sem) <------------------- V(sem)
+                              P(sem) <-------------------- V(sem)
+  send()
+
+Note that the aaa() and ready() operations above run in parallel. If that is not acceptable
+due to a race condition, the sequence above can be serialized into the one below:
+
+  UDP Server                  TCP Server1                  TCP Server2
+                                                           // No child any more
+                                                           ready()
+                              P(sem) <-------------------- V(sem)
+                              ready()
+  P(sem) <------------------- V(sem)
+  send()
+
+
+In order to implement such a chained/zipper sync pattern, a pair of semaphores is needed
+between the parent and the child. One is used by the child to wait for the parent; the
+other is used by the parent to wait for the child. The semaphore pair can be allocated by
+the parent, and a pointer to it passed to the child over the spawn() operation, such as
+pthread_create().
+
+/* semaphore pair definition */
+struct semaphores {
+    sem_t wait_parent;
+    sem_t wait_child;
+};
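+
+To make the hand-off concrete, here is a minimal sketch of how a parent could allocate the
+pair and pass it to a child over pthread_create() (assuming POSIX threads and unnamed
+semaphores; the helper name spawn_with_pair is illustrative, not part of the design):
+
+#include <pthread.h>
+#include <semaphore.h>
+#include <stdlib.h>
+
+struct semaphores *spawn_with_pair(pthread_t *tid, void *(*entry)(void *))
+{
+    struct semaphores *pair = malloc(sizeof(*pair));
+
+    sem_init(&pair->wait_parent, 0, 0);     /* child P()s here, parent V()s */
+    sem_init(&pair->wait_child, 0, 0);      /* parent P()s here, child V()s */
+    pthread_create(tid, NULL, entry, pair); /* child receives the pair as its argument
+                                               and records it as semlink.parent */
+    return pair;                            /* parent records this as semlink.this */
+}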
+
+Then the semaphore pair can be recorded by each thread using the semlink struct below:
+
+struct semlink {
+    struct semaphores *this;   /* used by the parent to point to the struct semaphores
+                                  which it created when spawning the child. */
+    struct semaphores *parent; /* used by the child to point to the struct semaphores
+                                  which was created by its parent. */
+};
+
+chained/zipper sync API:
+
+void sl_wait_child(struct semlink *sl);
+void sl_release_child(struct semlink *sl);
+void sl_wait_parent(struct semlink *sl);
+void sl_release_parent(struct semlink *sl);
+
+API usage is like this:
+
+Thread1(root parent)          Thread2(child)               Thread3(grandchild)
+sl_wait_parent(noop op)
+sl_release_child
+  +---------->sl_wait_parent
+              sl_release_child
+                +-----------> sl_wait_parent
+                              sl_release_child(noop op)
+                              ...
+                              sl_wait_child(noop op)
+                +             sl_release_parent
+              sl_wait_child <-------------
+  +             sl_release_parent
+sl_wait_child <------------
+sl_release_parent(noop op)
+
+API implementation:
+
+void sl_wait_child(struct semlink *sl)
+{
+    if (sl->this) {
+        P(sl->this->wait_child);
+    }
+}
+
+void sl_release_child(struct semlink *sl)
+{
+    if (sl->this) {
+        V(sl->this->wait_parent);
+    }
+}
+
+void sl_wait_parent(struct semlink *sl)
+{
+    if (sl->parent) {
+        P(sl->parent->wait_parent);
+    }
+}
+
+void sl_release_parent(struct semlink *sl)
+{
+    if (sl->parent) {
+        V(sl->parent->wait_child);
+    }
+}
+
+Client flow chart
+-----------------
+See Collaboration Diagram
+
+UDP thread flow chart
+---------------------
+See Collaboration Diagram
+
+TCP thread flow chart
+---------------------
+
+
+S_INIT --- (UDP initialized) ---> S_ACCEPT --- (accept clients) --+
+                                                                  |
+ /----------------------------------------------------------------/
+ V
+S_PREP --- (UDP prepared a buffer)
+ ^     |
+ |     \--> S_SYNC --- (clients CLIENT_READY)
+ |               |
+ |               \--> S_SEND --- (clients CLIENT_DONE)
+ |                         |
+ |                         V
+ \---------------(bufferctl.pkt_count != 0)-----------------------+
+                                                                  |
+                                                                  V
+                              exit() <--- (bufferctl.pkt_count == 0)
+
+
+TCP using poll and message queue
+--------------------------------
+
+TCP uses poll() to sync with clients' events as well as its own output events, so that we
+can use non-blocking socket operations to reduce latency. POLLIN means there are messages
+from a client, and POLLOUT means we are ready to send messages/retransmission packets to a
+client.
+
+poll main loop pseudo code:
+
+void check_clients(struct server_status_data *sdata)
+{
+    poll_events = poll(&(sdata->ds[1]), sdata->ccount - 1, timeout);
+
+    /* check all connected clients */
+    for (sdata->cindex = 1; sdata->cindex < sdata->ccount; sdata->cindex++) {
+        ds = &(sdata->ds[sdata->cindex]);
+        if (!ds->revents) {
+            continue;
+        }
+
+        if (ds->revents & (POLLERR|POLLHUP|POLLNVAL)) {
+            handle_error_event(sdata);
+        } else if (ds->revents & (POLLIN|POLLPRI)) {
+            handle_pullin_event(sdata);  // may set POLLOUT into ds->events
+                                         // to trigger handle_pullout_event().
+        } else if (ds->revents & POLLOUT) {
+            handle_pullout_event(sdata);
+        }
+    }
+}
+
+For TCP, since a message from a client may arrive incomplete, and a send may also be
+interrupted due to the non-blocking fashion, there should be one send message queue and one
+receive message queue on the server side for each client (clients do not use non-blocking
+operations).
+
+TCP message queue definition:
+
+struct tcpq {
+    struct qmsg *head, *tail;
+    long count; /* message count in a queue */
+    long size;  /* total data size of a queue */
+};
+
+TCP message queue item definition:
+
+struct qmsg {
+    struct qmsg *next;
+    void *data;
+    long size;
+};
+
+TCP message queue API:
+
+// Allocate and init a queue.
+struct tcpq * tcpq_queue_init(void);
+
+// Free a queue.
+void tcpq_queue_free(struct tcpq *q);
+
+// Return the queue's total data size.
+long tcpq_queue_dsize(struct tcpq *q);
+
+// Queue a new message at the tail.
+void tcpq_queue_tail(struct tcpq *q, void *data, long size);
+
+// Queue a message that cannot be sent right now back at the queue head.
+void tcpq_queue_head(struct tcpq *q, void *data, long size);
+
+// Get one piece from the queue head.
+void * tcpq_dequeue_head(struct tcpq *q, long *size);
+
+// Serialize all pieces of a queue and move them out of the queue, to ease further
+// operation on them.
+void * tcpq_dqueue_flat(struct tcpq *q, long *size);
+
+// Serialize all pieces of a queue without moving them out of the queue, to ease further
+// operation on them.
+void * tcpq_queue_flat_peek(struct tcpq *q, long *size);
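+
+As an illustration of how the send path could combine poll() with this queue, here is a
+sketch (not part of the specified API; it assumes the queue implementation copies data on
+insert and that the caller owns what tcpq_dequeue_head() returns):
+
+#include <errno.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/socket.h>
+
+void flush_sendq(struct tcpq *q, int fd)
+{
+    void *data;
+    long size;
+
+    while ((data = tcpq_dequeue_head(q, &size)) != NULL) {
+        ssize_t n = send(fd, data, size, 0);
+
+        if (n == size) {    /* whole piece sent, try the next one */
+            free(data);
+            continue;
+        }
+        if (n >= 0)         /* partial send: keep the unsent tail at the head */
+            tcpq_queue_head(q, (char *)data + n, size - n);
+        else if (errno == EAGAIN || errno == EWOULDBLOCK)
+            tcpq_queue_head(q, data, size); /* socket full, retry on next POLLOUT */
+        free(data);         /* other errors are left to the POLLERR path */
+        return;             /* wait for the next POLLOUT event */
+    }
+    /* queue drained: the caller can now clear POLLOUT from ds->events */
+}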
diff --git a/docs/development/requirements/multicast.rst b/docs/development/requirements/multicast.rst
new file mode 100644
index 00000000..ba314d3a
--- /dev/null
+++ b/docs/development/requirements/multicast.rst
@@ -0,0 +1,190 @@
+Requirement
+===========
+1. When deploying a large OPNFV/OpenStack cluster, we would like to take advantage of UDP
+multicast to avoid the network bottleneck that arises when distributing the Kolla container
+images from one Installer Server to all target hosts by unicast.
+
+2. When it comes to auto scaling (extension) of compute nodes, using unicast is acceptable,
+since the number of nodes in that case is usually small.
+
+The basic steps to introduce multicast into deployment are:
+
+a. Still set up the monopolistic docker registry server on the Daisy server as a failsafe.
+b. The Daisy server, as the multicast server, prepares the image file to be transmitted and
+counts how many target hosts (the multicast clients) should receive the image file
+simultaneously.
+c. Multicast clients tell the multicast server that they are ready to receive the image.
+d. The multicast server transmits the image over the UDP multicast channel.
+e. Multicast clients report success after receiving the whole image.
+f. Set up a docker registry server on each target host based upon the received docker image.
+g. Set up Kolla ansible to use 127.0.0.1 as the registry server IP, so that the real docker
+container retrieval traffic only takes place inside each target host.
+
+
+Design
+======
+
+Methods to achieve
+------------------
+
+TIPC
+++++
+
+TIPC, or a wrapper of it such as ZeroMQ, is good at multicast, but it is not suitable for
+an installer:
+
+1. The default TIPC kernel module shipped with CentOS7 (kernel version 3.10) is NOT stable,
+especially in L3 multicast (we could use L2 multicast instead, but that would limit the
+network to L2). If errors happen, it is hard to recover a node from a kernel panic.
+
+2. TIPC's design assumes a stable node cluster environment, especially on lossless
+Ethernet, but the real environment is generally not like that. When multicast is broken,
+the installer should switch to unicast, but TIPC currently does not have such a capability.
+
+Top level design
+----------------
+1. There are two kinds of threads on the server side: one UDP multicast thread and several
+TCP sync/retransmit threads. There will be more than one TCP thread, since one TCP thread
+can only serve a limited number of clients (say 64~128) in order to limit the CPU load and
+the unicast retransmission network usage.
+
+2. There is only one thread on the client side.
+
+3. All the packets that a client loses during UDP multicast will be requested by the client
+from a TCP thread and resent over TCP unicast. If unicast still cannot deliver the packets
+successfully, the client falls back to using the monopolistic docker registry server on the
+Daisy server as a failsafe option.
+
+4. Each packet needs a checksum.
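+
+For illustration only, the checksum could be the common CRC-32 (polynomial 0xEDB88320, as
+used by Ethernet/zlib) computed over each packet's payload and carried in the packet
+header's crc field described in the design document; the requirement does not mandate a
+particular algorithm, and this bitwise sketch is just one possible choice:
+
+#include <stddef.h>
+#include <stdint.h>
+
+uint32_t pkt_crc32(const uint8_t *data, size_t len)
+{
+    uint32_t crc = 0xFFFFFFFFu;
+
+    for (size_t i = 0; i < len; i++) {
+        crc ^= data[i];
+        for (int b = 0; b < 8; b++)
+            crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
+    }
+    return ~crc; /* the sender stores this; the receiver recomputes and compares */
+}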
+
+
+UDP Server Design (runs on Daisy Server)
+----------------------------------------
+
+1. The multicast group IP and port should be configurable, as well as the interface that
+will be used as the egress of the multicast packets. The user passes the interface's IP as
+the handle to find the egress.
+
+2. The image data to be sent is passed to the server through stdin.
+
+3. Considering that the image is large (several GB), the server cannot pre-allocate a
+buffer to hold the whole image at once. Besides, since the data comes from stdin, the
+actual length is unpredictable. So the server should split the data into small buffers and
+send them to the clients one by one. Furthermore, each buffer shall be divided into packets
+whose size is the MTU, including the UDP/IP header. The buffer size can then be, for
+example, 1024 * MTU (including the UDP/IP header).
+
+4. After sending one buffer to the clients, the server should stop and get feedback from
+the clients to see whether all of them have got all packets in that buffer. If any client
+lost any packets, it should request the server to resend them in a more reliable way (TCP).
+
+5. When EOF is got from stdin, the server should send a buffer whose size is 0 as an EOF
+signal to the clients, to let them know about the end of sending.
+
+
+TCP Server Design (runs on Daisy Server)
+----------------------------------------
+
+1. All TCP server threads and the only UDP thread share one process. The UDP thread is the
+parent thread, the first TCP thread is its child, the second TCP thread is its grandchild,
+and so on. Thus, for each TCP thread, there is only one parent and at most one child.
+
+2. A TCP thread accepts connect requests from clients. The number of clients is predefined
+by a server cmdline parameter. Each TCP thread connects with at most, say, 64 clients; if
+there are more clients to be connected, a child TCP thread is spawned by the parent.
+
+3. Before the UDP thread sends any buffer to the clients, all TCP threads should send the
+UDP multicast IP/port information to their clients beforehand.
+
+4. During each buffer sending cycle, the TCP threads send a special protocol message to
+tell the clients the size/id of the buffer and the id of each packet in it. After getting
+acknowledgements from all clients, the TCP threads signal the UDP thread to start
+multicasting the buffer over UDP. After multicasting finishes, the TCP threads notify the
+clients that multicast is done, and wait for acknowledgements from the clients again. If a
+client requests retransmission, it is the responsibility of the TCP threads to resend the
+packets over unicast. If no retransmission is needed, the clients signal the TCP threads
+that they are ready for the next buffer to come.
+
+5. Repeat step 4 if the buffer size was not 0 in the last round; otherwise, the TCP server
+shuts down the connection and exits.
+
+
+Server cmdline usage example
+----------------------------
+
+./server <local_ip> <client_count> [port] < kolla_image.tgz
+
+<local_ip> is used here to specify the multicast egress interface. Which interface will be
+used by TCP is left to the route table to decide.
+<client_count> indicates the number of clients, thus the number of target hosts which need
+to receive the image.
+[port] is the port that will be used by both UDP and TCP. A default value is used if the
+user does not provide it.
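+
+For example (hypothetical addresses, count, and port; adjust them to your environment),
+distributing an image to 10 target hosts with the multicast egress interface at 10.20.11.1
+could look like:
+
+./server 10.20.11.1 10 9527 < kolla_image.tgz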
+
+
+Client Design (Target Host side)
+--------------------------------
+
+1. Each target host has only one client process.
+
+2. The client connects to the TCP server according to the cmdline parameters right after
+start up.
+
+3. After connecting to the TCP server, the client first reads the multicast group
+information from the TCP server, which is then used to create the multicast receive socket.
+
+4. During each buffer receiving cycle, the client first reads the buffer info from the TCP
+server, prepares the receive buffer, and acknowledges to the TCP server that it is ready to
+receive. Then the client receives the buffer from the multicast socket until the TCP server
+notifies the end of multicast. By comparing the buffer info and the received packets, the
+client knows whether to send a retransmission request and whether to wait for retransmitted
+packets. After all packets are received over UDP/TCP, the client eventually flushes the
+buffer to stdout and tells the TCP server that it is ready to receive the next buffer.
+
+5. Repeat step 4 if the buffer size was not 0 in the last round; otherwise, the client
+shuts down the connection and exits.
+
+Client cmdline usage example
+----------------------------
+
+./client <local_ip> <server_ip> [port] > kolla_image.tgz
+
+<local_ip> is used here to specify the multicast ingress interface. Which interface will be
+used by TCP is left to the route table to decide.
+<server_ip> indicates the TCP server IP to be connected to.
+[port] is the port that will be used both to connect to the TCP server and to receive
+multicast data.
+
+
+Collaboration diagram among the UDP Server, the TCP Server (only one TCP thread is
+illustrated) and the Clients:
+
+
+UDP Server                TCP Server                         Client
+    |                         |                                 |
+init mcast group
+init mcast send socket
+    ---------------------------------->
+                          accept clients
+                          <------------------------connect-----------------
+                          --------------------send mcast group info------->
+    <----------------------------------
+state = PREP
+do {
+read data from stdin
+prepare one buffer
+    ----------------------------------->
+                          state = SYNC
+                          -------------------send buffer info-------------->
+                          <----------------------send CLIENT_READY-----------
+    <----------------------------------
+state = SEND
+
+================================================send buffer over UDP multicast======>
+    ----------------------------------->
+                          -----------------------send SERVER_SENT----------->
+                          [<-------------------send CLIENT_REQUEST----------]
+                          [--------------send buffer over TCP unicast------>]
+                                                         flush buffer to stdout
+                          <-------------------send CLIENT_DONE---------------
+    <----------------------------------
+state = PREP
+while (buffer.len != 0)
diff --git a/docs/installationprocedure/bmdeploy.rst b/docs/installationprocedure/bmdeploy.rst
deleted file mode 100644
index 38790290..00000000
--- a/docs/installationprocedure/bmdeploy.rst
+++ /dev/null
@@ -1,144 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International Licence.
-.. http://creativecommons.org/licenses/by/4.0
-
-Installation Guide (Bare Metal Deployment)
-==========================================
-
-Nodes Configuration (Bare Metal Deployment)
--------------------------------------------
-
-The below file is the inventory template of deployment nodes:
-
-"./deploy/config/bm_environment/zte-baremetal1/deploy.yml"
-
-You can write your own name/roles reference into it.
-
-  - name -- Host name for deployment node after installation.
-
-  - roles -- Components deployed. CONTROLLER_LB is for Controller,
-COMPUTER is for Compute role. Currently only these two role is supported.
-The first CONTROLLER_LB is also used for ODL controller. 3 hosts in
-inventory will be chosen to setup the Ceph storage cluster.
-
-**Set TYPE and FLAVOR**
-
-E.g.
-
-..
code-block:: yaml - - TYPE: virtual - FLAVOR: cluster - -**Assignment of different roles to servers** - -E.g. OpenStack only deployment roles setting - -.. code-block:: yaml - - hosts: - - name: host1 - roles: - - CONTROLLER_LB - - name: host2 - roles: - - COMPUTER - - name: host3 - roles: - - COMPUTER - -NOTE: -WE JUST SUPPORT ONE CONTROLLER NODE NOW. - -Network Configuration (Bare Metal Deployment) ------------------------------------------- - -Before deployment, there are some network configurations to be checked based -on your network topology. The default network configuration file for Daisy is -"./deploy/config/bm_environment/zte-baremetal1/network.yml". -You can write your own reference into it. - -**The following figure shows the default network configuration.** - -.. code-block:: console - - - +-B/M--------+------------------------------+ - |Jumperserver+ | - +------------+ +--+ | - | | | | - | +-V/M--------+ | | | - | | Daisyserver+------+ | | - | +------------+ | | | - | | | | - +------------------------------------| |---+ - | | - | | - +--+ | | - | | +-B/M--------+ | | - | +-------+ Controller +------+ | - | | | ODL(Opt.) | | | - | | | Network | | | - | | | CephOSD1 | | | - | | +------------+ | | - | | | | - | | | | - | | | | - | | +-B/M--------+ | | - | +-------+ Compute1 +------+ | - | | | CephOSD2 | | | - | | +------------+ | | - | | | | - | | | | - | | | | - | | +-B/M--------+ | | - | +-------+ Compute2 +------+ | - | | | CephOSD3 | | | - | | +------------+ | | - | | | | - | | | | - | | | | - +--+ +--+ - ^ ^ - | | - | | - /---------------------------\ | - | External Network | | - \---------------------------/ | - /-----------------------+---\ - | Installation Network | - | Public/Private API | - | Internet Access | - | Tenant Network | - | Storage Network | - | HeartBeat Network | - \---------------------------/ - - - - -Note: For Flat External networks(which is used by default), a physical interface is needed on each compute node for ODL NetVirt recent versions. -HeartBeat network is selected,and if it is configured in network.yml,the keepalived interface will be the heartbeat interface. - -Start Deployment (Bare Metal Deployment) ----------------------------------------- - -(1) Git clone the latest daisy4nfv code from opnfv: "git clone https://gerrit.opnfv.org/gerrit/daisy" - -(2) Download latest bin file(such as opnfv-2017-06-06_23-00-04.bin) of daisy from http://artifacts.opnfv.org/daisy.html and change the bin file name(such as opnfv-2017-06-06_23-00-04.bin) to opnfv.bin - -(3) Make sure the opnfv.bin file is in daisy4nfv code dir - -(4) Create folder of labs/zte/pod2/daisy/config in daisy4nfv code dir - -(5) Move the ./deploy/config/bm_environment/zte-baremetal1/deploy.yml and ./deploy/config/bm_environment/zte-baremetal1/network.yml to labs/zte/pod2/daisy/config dir. 
- -(6) Config the bridge in jumperserver,make sure the daisy vm can connect to the targetnode,use the command below: -brctl addbr br7 -brctl addif br7 enp3s0f3(the interface for jumperserver to connect to daisy vm) -ifconfig br7 10.20.7.1 netmask 255.255.255.0 up -service network restart - -(7) Run the script deploy.sh in daisy/ci/deploy/ with command: -sudo ./ci/deploy/deploy.sh -b ../daisy -l zte -p pod2 -s os-nosdn-nofeature-noha - -(8) When deploy successfully,the floating ip of openstack is 10.20.7.11,the login account is "admin" and the password is "keystone" diff --git a/docs/installationprocedure/index.rst b/docs/installationprocedure/index.rst deleted file mode 100644 index 8c5a3da7..00000000 --- a/docs/installationprocedure/index.rst +++ /dev/null @@ -1,18 +0,0 @@ -.. _daisy-installation: - -.. This work is licensed under a Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 - -********************************** -OPNFV Daisy4nfv Installation Guide -********************************** - -.. toctree:: - :numbered: - :maxdepth: 4 - - introduction.rst - installation_guide.rst - bmdeploy.rst - vmdeploy.rst - diff --git a/docs/installationprocedure/installation_guide.rst b/docs/installationprocedure/installation_guide.rst deleted file mode 100644 index 5afd73aa..00000000 --- a/docs/installationprocedure/installation_guide.rst +++ /dev/null @@ -1,168 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 - -Daisy4nfv configuration -======================= - -This document provides guidelines on how to install and configure the Danube -release of OPNFV when using Daisy as a deployment tool including required -software and hardware configurations. - -Installation and configuration of host OS, OpenStack etc. can be supported by -Daisy on Virtual nodes and Bare Metal nodes. - -The audience of this document is assumed to have good knowledge in -networking and Unix/Linux administration. - -Prerequisites -------------- - -Before starting the installation of the Danube release of OPNFV, some plannings -must be done. - - -Retrieve the installation bin image -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -First of all, the installation bin which includes packages of Daisy, OS, -OpenStack, and so on is needed for deploying your OPNFV environment. - -The stable release bin image can be retrieved via `OPNFV software download page `_ - -The daily build bin image can be retrieved via OPNFV artifact repository: - -http://artifacts.opnfv.org/daisy.html - -NOTE: Search the keyword "daisy/Danube" to locate the bin image. - -E.g. -daisy/opnfv-gerrit-27155.bin - -The git url and sha1 of bin image are recorded in properties files. -According to these, the corresponding deployment scripts can be retrieved. - - -Retrieve the deployment scripts -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -To retrieve the repository of Daisy on Jumphost use the following command: - -- git clone https://gerrit.opnfv.org/gerrit/daisy - -To get stable Danube release, you can use the following command: - -- git checkout danube.1.0 - - -Setup Requirements ------------------- - -If you have only 1 Bare Metal server, Virtual deployment is recommended. if you have more -than 3 servers, the Bare Metal deployment is recommended. The minimum number of -servers for each role in Bare metal deployment is listed below. 
- -+------------+------------------------+ -| **Role** | **Number of Servers** | -| | | -+------------+------------------------+ -| Jump Host | 1 | -| | | -+------------+------------------------+ -| Controller | 1 | -| | | -+------------+------------------------+ -| Compute | 1 | -| | | -+------------+------------------------+ - - -Jumphost Requirements -~~~~~~~~~~~~~~~~~~~~~ - -The Jumphost requirements are outlined below: - -1. CentOS 7.2 (Pre-installed). - -2. Root access. - -3. Libvirt virtualization support(For virtual deployment). - -4. Minimum 1 NIC(or 2 NICs for virtual deployment). - - - PXE installation Network (Receiving PXE request from nodes and providing OS provisioning) - - - IPMI Network (Nodes power control and set boot PXE first via IPMI interface) - - - Internet access (For getting latest OS updates) - - - External Interface(For virtual deployment, exclusively used by instance traffic to access the rest of the Internet) - -5. 16 GB of RAM for a Bare Metal deployment, 64 GB of RAM for a Virtual deployment. - -6. CPU cores: 32, Memory: 64 GB, Hard Disk: 500 GB, (Virtual deployment needs 1 TB Hard Disk) - - -Bare Metal Node Requirements ----------------------------- - -Bare Metal nodes require: - -1. IPMI enabled on OOB interface for power control. - -2. BIOS boot priority should be PXE first then local hard disk. - -3. Minimum 1 NIC for Compute nodes, 2 NICs for Controller nodes. - - - PXE installation Network (Broadcasting PXE request) - - - IPMI Network (Receiving IPMI command from Jumphost) - - - Internet access (For getting latest OS updates) - - - External Interface(For virtual deployment, exclusively used by instance traffic to access the rest of the Internet) - - - - -Network Requirements --------------------- - -Network requirements include: - -1. No DHCP or TFTP server running on networks used by OPNFV. - -2. 2-7 separate networks with connectivity between Jumphost and nodes. - - - PXE installation Network - - - IPMI Network - - - Internet access Network - - - OpenStack Public API Network - - - OpenStack Private API Network - - - OpenStack External Network - - - OpenStack Tenant Network(currently, VxLAN only) - - -3. Lights out OOB network access from Jumphost with IPMI node enabled (Bare Metal deployment only). - -4. Internet access Network has Internet access, meaning a gateway and DNS availability. - -5. OpenStack External Network has Internet access too if you want instances to access the Internet. - -Note: **All networks except OpenStack External Network can share one NIC(Default configuration) or use an exclusive** -**NIC(Reconfigurated in network.yml).** - - -Execution Requirements (Bare Metal Only) ----------------------------------------- - -In order to execute a deployment, one must gather the following information: - -1. IPMI IP addresses of the nodes. - -2. IPMI login information for the nodes (user/password). diff --git a/docs/installationprocedure/introduction.rst b/docs/installationprocedure/introduction.rst deleted file mode 100644 index 4781ab7d..00000000 --- a/docs/installationprocedure/introduction.rst +++ /dev/null @@ -1,23 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International Licence. -.. http://creativecommons.org/licenses/by/4.0 - -Abstract -======== - -This document describes how to install the Danube release of OPNFV when using -Daisy4nfv as a deployment tool covering it's limitations, dependencies and -required resources. 
- -Version history -=============== - -+--------------------+--------------------+--------------------+---------------------------+ -| **Date** | **Ver.** | **Author** | **Comment** | -| | | | | -+--------------------+--------------------+--------------------+---------------------------+ -| 2017-02-07 | 0.0.1 | Zhijiang Hu | Initial version | -| | | (ZTE) | | -+--------------------+--------------------+--------------------+---------------------------+ - - - diff --git a/docs/installationprocedure/vmdeploy.rst b/docs/installationprocedure/vmdeploy.rst deleted file mode 100644 index 2ed6b001..00000000 --- a/docs/installationprocedure/vmdeploy.rst +++ /dev/null @@ -1,144 +0,0 @@ -.. This work is licensed under a Creative Commons Attribution 4.0 International Licence. -.. http://creativecommons.org/licenses/by/4.0 - -Installation Guide (Virtual Deployment) -======================================= - -Nodes Configuration (Virtual Deployment) ----------------------------------------- - -The below file is the inventory template of deployment nodes: - -"./deploy/conf/vm_environment/zte-virtual1/deploy.yml" - -You can write your own name/roles reference into it. - - - name -- Host name for deployment node after installation. - - - roles -- Components deployed. - -**Set TYPE and FLAVOR** - -E.g. - -.. code-block:: yaml - - TYPE: virtual - FLAVOR: cluster - -**Assignment of different roles to servers** - -E.g. OpenStack only deployment roles setting - -.. code-block:: yaml - - hosts: - - name: host1 - roles: - - controller - - - name: host2 - roles: - - compute - -NOTE: -WE JUST SUPPORT ONE CONTROLLER NODE NOW. - -E.g. OpenStack and ceph deployment roles setting - -.. code-block:: yaml - - hosts: - - name: host1 - roles: - - controller - - - name: host2 - roles: - - compute - -Network Configuration (Virtual Deployment) ------------------------------------------- - -Before deployment, there are some network configurations to be checked based -on your network topology. The default network configuration file for Daisy is -"daisy/deploy/config/vm_environment/zte-virtual1/network.yml". -You can write your own reference into it. - -**The following figure shows the default network configuration.** - -.. code-block:: console - - - +-B/M--------+------------------------------+ - |Jumperserver+ | - +------------+ +--+ | - | | | | - | +-V/M--------+ | | | - | | Daisyserver+------+ | | - | +------------+ | | | - | | | | - | +--+ | | | - | | | +-V/M--------+ | | | - | | +-------+ Controller +------+ | | - | | | | ODL(Opt.) 
| | | | - | | | | Network | | | | - | | | | Ceph1 | | | | - | | | +------------+ | | | - | | | | | | - | | | | | | - | | | | | | - | | | +-V/M--------+ | | | - | | +-------+ Compute1 +------+ | | - | | | | Ceph2 | | | | - | | | +------------+ | | | - | | | | | | - | | | | | | - | | | | | | - | | | +-V/M--------+ | | | - | | +-------+ Compute2 +------+ | | - | | | | Ceph3 | | | | - | | | +------------+ | | | - | | | | | | - | | | | | | - | | | | | | - | +--+ +--+ | - | ^ ^ | - | | | | - | | | | - | /---------------------------\ | | - | | External Network | | | - | \---------------------------/ | | - | /-----------------------+---\ | - | | Installation Network | | - | | Public/Private API | | - | | Internet Access | | - | | Tenant Network | | - | | Storage Network | | - | | HeartBeat Network | | - | \---------------------------/ | - +-------------------------------------------+ - - - -Note: For Flat External networks(which is used by default), a physical interface is needed on each compute node for ODL NetVirt recent versions. -HeartBeat network is selected,and if it is configured in network.yml,the keepalived interface will be the heartbeat interface. - -Start Deployment (Virtual Deployment) -------------------------------------- - -(1) Git clone the latest daisy4nfv code from opnfv: "git clone https://gerrit.opnfv.org/gerrit/daisy" - -(2) Download latest bin file(such as opnfv-2017-06-06_23-00-04.bin) of daisy from http://artifacts.opnfv.org/daisy.html and change the bin file name(such as opnfv-2017-06-06_23-00-04.bin) to opnfv.bin - -(3) Make sure the opnfv.bin file is in daisy4nfv code dir - -(4) Create folder of labs/zte/virtual1/daisy/config in daisy4nfv code dir - -(5) Move the daisy/deploy/config/vm_environment/zte-virtual1/deploy.yml and daisy/deploy/config/vm_environment/zte-virtual1/network.yml to labs/zte/virtual1/daisy/config dir. -Notes:zte-virtual1 config file is just for all-in-one deployment,if you want to deploy openstack with five node(1 lb node and 4 computer nodes),change the zte-virtual1 to zte-virtual2 - -(6) Run the script deploy.sh in daisy/ci/deploy/ with command: -sudo ./ci/deploy/deploy.sh -b ../daisy -l zte -p virtual1 -s os-nosdn-nofeature-noha - -(7) When deploy successfully,the floating ip of openstack is 10.20.11.11,the login account is "admin" and the password is "keystone" diff --git a/docs/release/configguide/index.rst b/docs/release/configguide/index.rst new file mode 100644 index 00000000..7b531f45 --- /dev/null +++ b/docs/release/configguide/index.rst @@ -0,0 +1,16 @@ +.. This document is protected/licensed under the following conditions +.. (c) Sun Jing (ZTE corporation) +.. Licensed under a Creative Commons Attribution 4.0 International License. +.. You should have received a copy of the license along with this work. +.. If not, see . + +***************************** +Release notes for Daisy4nfv +***************************** + +.. toctree:: + :numbered: + :maxdepth: 2 + + installerconfig.rst + diff --git a/docs/release/configguide/installerconfig.rst b/docs/release/configguide/installerconfig.rst new file mode 100644 index 00000000..795f6028 --- /dev/null +++ b/docs/release/configguide/installerconfig.rst @@ -0,0 +1,15 @@ + +.. This document is protected/licensed under the following conditions +.. (c) Sun Jing (ZTE corporation) +.. Licensed under a Creative Commons Attribution 4.0 International License. +.. You should have received a copy of the license along with this work. +.. If not, see . 
+
+
+========
+Abstract
+========
+
+This document compiles the release notes for the D 2.0 release of
+OPNFV when using Daisy4nfv as a deployment tool.
+
diff --git a/docs/release/installation/bmdeploy.rst b/docs/release/installation/bmdeploy.rst
new file mode 100644
index 00000000..47a8e121
--- /dev/null
+++ b/docs/release/installation/bmdeploy.rst
@@ -0,0 +1,150 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International Licence.
+.. http://creativecommons.org/licenses/by/4.0
+
+Installation Guide (Bare Metal Deployment)
+==========================================
+
+Nodes Configuration (Bare Metal Deployment)
+-------------------------------------------
+
+The file below is the inventory template of deployment nodes:
+
+"./deploy/config/bm_environment/zte-baremetal1/deploy.yml"
+
+You can write your own name/roles reference into it.
+
+  - name -- Host name for the deployment node after installation.
+
+  - roles -- Components deployed. CONTROLLER_LB is for the Controller role,
+COMPUTER is for the Compute role. Currently only these two roles are supported.
+The first CONTROLLER_LB is also used for the ODL controller. 3 hosts in the
+inventory will be chosen to set up the Ceph storage cluster.
+
+**Set TYPE and FLAVOR**
+
+E.g.
+
+.. code-block:: yaml
+
+    TYPE: virtual
+    FLAVOR: cluster
+
+**Assignment of different roles to servers**
+
+E.g. OpenStack only deployment roles setting
+
+.. code-block:: yaml
+
+    hosts:
+      - name: host1
+        roles:
+          - CONTROLLER_LB
+      - name: host2
+        roles:
+          - COMPUTER
+      - name: host3
+        roles:
+          - COMPUTER
+
+
+NOTE:
+For B/M deployment, Daisy uses the MAC addresses defined in deploy.yml to map discovered
+nodes to the node items defined there, and then assigns the role described by each node
+item to the discovered node by name pattern. Currently, controller01, controller02, and
+controller03 will be assigned the Controller role, while computer01, computer02,
+computer03, and computer04 will be assigned the Compute role. A sketch of what such a node
+item could look like is given after these notes.
+
+NOTE:
+For V/M deployment, there is no MAC address defined in deploy.yml for each virtual machine.
+Instead, Daisy fills in that blank by getting the MAC from "virsh dumpxml".
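+
+The following is a hypothetical sketch of the mapping idea only; the exact key names in
+deploy.yml (e.g. "mac") may differ, so consult the shipped template before editing:
+
+.. code-block:: yaml
+
+    hosts:
+      - name: controller01        # name pattern implies the Controller role
+        mac: 'e4:43:4b:11:22:33'  # must match the MAC reported by the discovered node
+        roles:
+          - CONTROLLER_LB
+      - name: computer01          # name pattern implies the Compute role
+        mac: 'e4:43:4b:44:55:66'
+        roles:
+          - COMPUTER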
+
+**The following figure shows the default network configuration.**
+
+.. code-block:: console
+
+
+   +-B/M--------+------------------------------+
+   |Jumperserver+                              |
+   +------------+                       +--+   |
+   |                                    |  |   |
+   |   +-V/M--------+                   |  |   |
+   |   | Daisyserver+-------------------+  |   |
+   |   +------------+                   |  |   |
+   |                                    |  |   |
+   +------------------------------------|  |---+
+                                        |  |
+                                        |  |
+   +--+                                 |  |
+   |  |    +-B/M--------+               |  |
+   |  +----+ Controller +---------------+  |
+   |  |    | ODL(Opt.)  |               |  |
+   |  |    | Network    |               |  |
+   |  |    | CephOSD1   |               |  |
+   |  |    +------------+               |  |
+   |  |                                 |  |
+   |  |    +-B/M--------+               |  |
+   |  +----+  Compute1  +---------------+  |
+   |  |    | CephOSD2   |               |  |
+   |  |    +------------+               |  |
+   |  |                                 |  |
+   |  |    +-B/M--------+               |  |
+   |  +----+  Compute2  +---------------+  |
+   |  |    | CephOSD3   |               |  |
+   |  |    +------------+               |  |
+   |  |                                 |  |
+   +--+                                 +--+
+    ^                                    ^
+    |                                    |
+    |                                    |
+   /---------------------------\         |
+   |     External Network      |         |
+   \---------------------------/         |
+           /-----------------------+---\
+           |   Installation Network    |
+           |   Public/Private API      |
+           |   Internet Access         |
+           |   Tenant Network          |
+           |   Storage Network         |
+           |   HeartBeat Network       |
+           \---------------------------/
+
+
+
+Note:
+For Flat External networks (used by default), a physical interface is needed
+on each compute node for recent ODL NetVirt versions. If the HeartBeat
+network is selected and configured in network.yml, the keepalived interface
+will be the heartbeat interface.
+
+Start Deployment (Bare Metal Deployment)
+----------------------------------------
+
+(1) Clone the latest daisy4nfv code from OPNFV: "git clone https://gerrit.opnfv.org/gerrit/daisy"
+
+(2) Download the latest Daisy bin file (such as opnfv-2017-06-06_23-00-04.bin) from http://artifacts.opnfv.org/daisy.html and rename it to opnfv.bin
+
+(3) Make sure the opnfv.bin file is in the daisy4nfv code directory
+
+(4) Create the folder labs/zte/pod2/daisy/config in the daisy4nfv code directory
+
+(5) Move ./deploy/config/bm_environment/zte-baremetal1/deploy.yml and ./deploy/config/bm_environment/zte-baremetal1/network.yml to the labs/zte/pod2/daisy/config directory
+
+(6) Configure the bridge on the jumperserver so that the Daisy VM can reach the target nodes, using the commands below (enp3s0f3 is the interface the jumperserver uses to connect to the Daisy VM):
+
+.. code-block:: console
+
+   brctl addbr br7
+   brctl addif br7 enp3s0f3
+   ifconfig br7 10.20.7.1 netmask 255.255.255.0 up
+   service network restart
+
+(7) Run the deploy.sh script in daisy/ci/deploy/ with the following command:
+sudo ./ci/deploy/deploy.sh -b ../daisy -l zte -p pod2 -s os-nosdn-nofeature-noha
+
+(8) After a successful deployment, the floating IP of OpenStack is 10.20.7.11, the login account is "admin" and the password is "keystone"
diff --git a/docs/release/installation/index.rst b/docs/release/installation/index.rst
new file mode 100644
index 00000000..8c5a3da7
--- /dev/null
+++ b/docs/release/installation/index.rst
@@ -0,0 +1,18 @@
+.. _daisy-installation:
+
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+**********************************
+OPNFV Daisy4nfv Installation Guide
+**********************************
+
+.. toctree::
+   :numbered:
+   :maxdepth: 4
+
+   introduction.rst
+   installation_guide.rst
+   bmdeploy.rst
+   vmdeploy.rst
+
diff --git a/docs/release/installation/installation_guide.rst b/docs/release/installation/installation_guide.rst
new file mode 100644
index 00000000..5afd73aa
--- /dev/null
+++ b/docs/release/installation/installation_guide.rst
@@ -0,0 +1,168 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+Daisy4nfv configuration
+=======================
+
+This document provides guidelines on how to install and configure the Danube
+release of OPNFV when using Daisy as a deployment tool, including the
+required software and hardware configurations.
+
+Installation and configuration of the host OS, OpenStack, etc. can be
+performed by Daisy on both Virtual nodes and Bare Metal nodes.
+
+The audience of this document is assumed to have good knowledge of
+networking and Unix/Linux administration.
+
+Prerequisites
+-------------
+
+Before starting the installation of the Danube release of OPNFV, some
+planning must be done.
+
+
+Retrieve the installation bin image
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+First of all, the installation bin file, which includes the packages of
+Daisy, the OS, OpenStack, and so on, is needed to deploy your OPNFV
+environment.
+
+The stable release bin image can be retrieved via `OPNFV software download page `_
+
+The daily build bin image can be retrieved via the OPNFV artifact repository:
+
+http://artifacts.opnfv.org/daisy.html
+
+NOTE: Search the keyword "daisy/Danube" to locate the bin image.
+
+E.g.
+daisy/opnfv-gerrit-27155.bin
+
+The git URL and SHA1 of the bin image are recorded in the corresponding
+properties files. According to these, the matching deployment scripts can
+be retrieved.
+
+
+Retrieve the deployment scripts
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To retrieve the Daisy repository on the Jumphost, use the following command:
+
+- git clone https://gerrit.opnfv.org/gerrit/daisy
+
+To get the stable Danube release, use the following command:
+
+- git checkout danube.1.0
+
+
+Setup Requirements
+------------------
+
+If you have only 1 Bare Metal server, Virtual deployment is recommended. If
+you have more than 3 servers, Bare Metal deployment is recommended. The
+minimum number of servers for each role in a Bare Metal deployment is listed
+below.
+
++------------+------------------------+
+| **Role**   | **Number of Servers**  |
+|            |                        |
++------------+------------------------+
+| Jump Host  | 1                      |
+|            |                        |
++------------+------------------------+
+| Controller | 1                      |
+|            |                        |
++------------+------------------------+
+| Compute    | 1                      |
+|            |                        |
++------------+------------------------+
+
+
+Jumphost Requirements
+~~~~~~~~~~~~~~~~~~~~~
+
+The Jumphost requirements are outlined below:
+
+1. CentOS 7.2 (Pre-installed).
+
+2. Root access.
+
+3. Libvirt virtualization support (for virtual deployment).
+
+4. Minimum 1 NIC (or 2 NICs for virtual deployment).
+
+ - PXE installation Network (receiving PXE requests from nodes and providing OS provisioning)
+
+ - IPMI Network (node power control and setting PXE-first boot via the IPMI interface)
+
+ - Internet access (for getting the latest OS updates)
+
+ - External Interface (for virtual deployment, exclusively used by instance traffic to access the rest of the Internet)
+
+5. 16 GB of RAM for a Bare Metal deployment, 64 GB of RAM for a Virtual deployment.
+
+6. CPU cores: 32, Memory: 64 GB, Hard Disk: 500 GB (a Virtual deployment needs a 1 TB hard disk).
+
+
+Bare Metal Node Requirements
+----------------------------
+
+Bare Metal nodes require:
+
+1. IPMI enabled on the OOB interface for power control.
+
+2. BIOS boot priority should be PXE first, then local hard disk.
+
+3. Minimum 1 NIC for Compute nodes, 2 NICs for Controller nodes.
+
+ - PXE installation Network (broadcasting PXE requests)
+
+ - IPMI Network (receiving IPMI commands from the Jumphost)
+
+ - Internet access (for getting the latest OS updates)
+
+ - External Interface (exclusively used by instance traffic to access the rest of the Internet)
+
+
+
+
+Network Requirements
+--------------------
+
+Network requirements include:
+
+1. No DHCP or TFTP server running on networks used by OPNFV.
+
+2. 2-7 separate networks with connectivity between the Jumphost and the nodes.
+
+ - PXE installation Network
+
+ - IPMI Network
+
+ - Internet access Network
+
+ - OpenStack Public API Network
+
+ - OpenStack Private API Network
+
+ - OpenStack External Network
+
+ - OpenStack Tenant Network (currently VxLAN only)
+
+
+3. Lights-out OOB network access from the Jumphost, with IPMI enabled on the nodes (Bare Metal deployment only).
+
+4. The Internet access Network has Internet access, meaning a gateway and DNS availability.
+
+5. The OpenStack External Network also needs Internet access if you want instances to access the Internet.
+
+Note: **All networks except the OpenStack External Network can share one NIC (default configuration) or use an exclusive**
+**NIC (reconfigured in network.yml).**
+
+
+Execution Requirements (Bare Metal Only)
+----------------------------------------
+
+In order to execute a deployment, one must gather the following information:
+
+1. IPMI IP addresses of the nodes.
+
+2. IPMI login information for the nodes (user/password).
diff --git a/docs/release/installation/introduction.rst b/docs/release/installation/introduction.rst
new file mode 100644
index 00000000..4781ab7d
--- /dev/null
+++ b/docs/release/installation/introduction.rst
@@ -0,0 +1,23 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+Abstract
+========
+
+This document describes how to install the Danube release of OPNFV when using
+Daisy4nfv as a deployment tool, covering its limitations, dependencies and
+required resources.
+
+Version history
+===============
+
++--------------------+--------------------+--------------------+---------------------------+
+| **Date**           | **Ver.**           | **Author**         | **Comment**               |
+|                    |                    |                    |                           |
++--------------------+--------------------+--------------------+---------------------------+
+| 2017-02-07         | 0.0.1              | Zhijiang Hu        | Initial version           |
+|                    |                    | (ZTE)              |                           |
++--------------------+--------------------+--------------------+---------------------------+
+
+
+
diff --git a/docs/release/installation/vmdeploy.rst b/docs/release/installation/vmdeploy.rst
new file mode 100644
index 00000000..3812a40e
--- /dev/null
+++ b/docs/release/installation/vmdeploy.rst
@@ -0,0 +1,150 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+Installation Guide (Virtual Deployment)
+=======================================
+
+Nodes Configuration (Virtual Deployment)
+----------------------------------------
+
+The file below is the inventory template for the deployment nodes:
+
+"./deploy/config/vm_environment/zte-virtual1/deploy.yml"
+
+You can set your own name/roles combinations in it.
+
+ - name -- Host name for the deployment node after installation.
+
+ - roles -- Components to be deployed.
+
+**Set TYPE and FLAVOR**
+
+E.g.
+
+.. code-block:: yaml
+
+   TYPE: virtual
+   FLAVOR: cluster
+
+**Assignment of different roles to servers**
+
+E.g. OpenStack only deployment roles setting
+
+.. code-block:: yaml
+
+   hosts:
+     - name: host1
+       roles:
+         - controller
+
+     - name: host2
+       roles:
+         - compute
+
+NOTE:
+For B/M, Daisy uses the MAC addresses defined in deploy.yml to map discovered
+nodes to the node items defined in deploy.yml, and then assigns the role
+described by each node item to the discovered node by host name pattern.
+Currently, controller01, controller02, and controller03 will be assigned the
+Controller role, while computer01, computer02, computer03, and computer04
+will be assigned the Compute role, as illustrated by the sketch following
+this note.
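+
+The following is a minimal Python sketch of the name-pattern mapping
+described in the NOTE above. It is purely illustrative -- it is not
+Daisy's actual implementation, and the helper name role_for() is an
+assumption made for this example:
+
+.. code-block:: python
+
+   import re
+
+   # Illustrative only: map a discovered node's host name to a role
+   # by pattern, as described in the NOTE above.
+   ROLE_PATTERNS = [
+       (re.compile(r"^controller\d+$"), "Controller"),  # controller01..03
+       (re.compile(r"^computer\d+$"), "Compute"),       # computer01..04
+   ]
+
+   def role_for(hostname):
+       """Return the role for a discovered node name, or None."""
+       for pattern, role in ROLE_PATTERNS:
+           if pattern.match(hostname):
+               return role
+       return None
+
+   assert role_for("controller02") == "Controller"
+   assert role_for("computer04") == "Compute"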
+
+NOTE:
+For V/M, there is no MAC address defined in deploy.yml for each virtual
+machine. Instead, Daisy fills in that blank by getting the MAC address from
+"virsh dumpxml".
+
+E.g. OpenStack and Ceph deployment roles setting
+
+.. code-block:: yaml
+
+   hosts:
+     - name: host1
+       roles:
+         - controller
+
+     - name: host2
+       roles:
+         - compute
+
+Network Configuration (Virtual Deployment)
+------------------------------------------
+
+Before deployment, there are some network configurations to be checked based
+on your network topology. The default network configuration file for Daisy is
+"daisy/deploy/config/vm_environment/zte-virtual1/network.yml".
+You can write your own configuration into it.
+
+**The following figure shows the default network configuration.**
+
+.. code-block:: console
+
+
+   +-B/M--------+------------------------------+
+   |Jumperserver+                              |
+   +------------+                      +--+    |
+   |                                   |  |    |
+   |  +-V/M--------+                   |  |    |
+   |  | Daisyserver+-------------------+  |    |
+   |  +------------+                   |  |    |
+   |                                   |  |    |
+   |  +--+                             |  |    |
+   |  |  |   +-V/M--------+            |  |    |
+   |  |  +---+ Controller +------------+  |    |
+   |  |  |   | ODL(Opt.)  |            |  |    |
+   |  |  |   | Network    |            |  |    |
+   |  |  |   | Ceph1      |            |  |    |
+   |  |  |   +------------+            |  |    |
+   |  |  |                             |  |    |
+   |  |  |   +-V/M--------+            |  |    |
+   |  |  +---+  Compute1  +------------+  |    |
+   |  |  |   | Ceph2      |            |  |    |
+   |  |  |   +------------+            |  |    |
+   |  |  |                             |  |    |
+   |  |  |   +-V/M--------+            |  |    |
+   |  |  +---+  Compute2  +------------+  |    |
+   |  |  |   | Ceph3      |            |  |    |
+   |  |  |   +------------+            |  |    |
+   |  |  |                             |  |    |
+   |  +--+                             +--+    |
+   |   ^                                ^      |
+   |   |                                |      |
+   |   |                                |      |
+   |  /---------------------------\     |      |
+   |  |     External Network      |     |      |
+   |  \---------------------------/     |      |
+   |     /-----------------------+---\         |
+   |     |   Installation Network    |         |
+   |     |   Public/Private API      |         |
+   |     |   Internet Access         |         |
+   |     |   Tenant Network          |         |
+   |     |   Storage Network         |         |
+   |     |   HeartBeat Network       |         |
+   |     \---------------------------/         |
+   +-------------------------------------------+
+
+
+
+Note:
+For Flat External networks (used by default), a physical interface is needed
+on each compute node for recent ODL NetVirt versions. If the HeartBeat
+network is selected and configured in network.yml, the keepalived interface
+will be the heartbeat interface.
+
+Start Deployment (Virtual Deployment)
+-------------------------------------
+
+(1) Clone the latest daisy4nfv code from OPNFV: "git clone https://gerrit.opnfv.org/gerrit/daisy"
+
+(2) Download the latest Daisy bin file (such as opnfv-2017-06-06_23-00-04.bin) from http://artifacts.opnfv.org/daisy.html and rename it to opnfv.bin
+
+(3) Make sure the opnfv.bin file is in the daisy4nfv code directory
+
+(4) Create the folder labs/zte/virtual1/daisy/config in the daisy4nfv code directory
+
+(5) Move daisy/deploy/config/vm_environment/zte-virtual1/deploy.yml and daisy/deploy/config/vm_environment/zte-virtual1/network.yml to the labs/zte/virtual1/daisy/config directory (the expected layout is sketched below).
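+
+Assuming steps (4) and (5) succeeded, the config directory should contain
+the two files shown below (rendered here with the common "tree" utility;
+this listing is a sketch, not Daisy output):
+
+.. code-block:: console
+
+   $ tree labs/zte/virtual1/daisy/config
+   labs/zte/virtual1/daisy/config
+   |-- deploy.yml
+   `-- network.yml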
+
+Note:
+The zte-virtual1 config files are just for all-in-one deployment. If you want
+to deploy OpenStack with five nodes (1 lb node and 4 compute nodes), change
+zte-virtual1 to zte-virtual2.
+
+(6) Run the deploy.sh script in daisy/ci/deploy/ with the following command:
+sudo ./ci/deploy/deploy.sh -b ../daisy -l zte -p virtual1 -s os-nosdn-nofeature-noha
+
+(7) After a successful deployment, the floating IP of OpenStack is 10.20.11.11, the login account is "admin" and the password is "keystone"
diff --git a/docs/release/release-notes/index.rst b/docs/release/release-notes/index.rst
new file mode 100644
index 00000000..0da52b5f
--- /dev/null
+++ b/docs/release/release-notes/index.rst
@@ -0,0 +1,18 @@
+.. _daisy-releasenotes:
+
+.. This document is protected/licensed under the following conditions
+.. (c) Sun Jing (ZTE corporation)
+.. Licensed under a Creative Commons Attribution 4.0 International License.
+.. You should have received a copy of the license along with this work.
+.. If not, see .
+
+***************************
+Release notes for Daisy4nfv
+***************************
+
+.. toctree::
+   :numbered:
+   :maxdepth: 2
+
+   release-notes.rst
+
diff --git a/docs/release/release-notes/release-notes.rst b/docs/release/release-notes/release-notes.rst
new file mode 100644
index 00000000..629d05de
--- /dev/null
+++ b/docs/release/release-notes/release-notes.rst
@@ -0,0 +1,140 @@
+
+.. This document is protected/licensed under the following conditions
+.. (c) Sun Jing (ZTE corporation)
+.. Licensed under a Creative Commons Attribution 4.0 International License.
+.. You should have received a copy of the license along with this work.
+.. If not, see .
+
+
+========
+Abstract
+========
+
+This document covers the features, limitations and required system resources
+of the OPNFV E 1.0 release when using Daisy4nfv as a deployment tool.
+
+Introduction
+============
+
+Daisy4nfv is an OPNFV installer project based on the open source project
+Daisycloud-core, which provides containerized deployment and management of
+OpenStack and other distributed systems such as OpenDaylight.
+
+Release Data
+============
+
++--------------------------------------+--------------------------------------+
+| **Project**                          | Daisy4nfv                            |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Repo/tag**                         | Daisy4nfv/Euphrates.1.0              |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Release designation**              | Euphrates.1.0                        |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Release date**                     |                                      |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Purpose of the delivery**          | OPNFV Euphrates release              |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+
+Deliverables
+------------
+
+Software deliverables
+~~~~~~~~~~~~~~~~~~~~~
+
+ - Daisy4nfv/Euphrates.1.0 ISO; please get it from `OPNFV software download page `_
+
+.. _document-label:
+
+Documentation deliverables
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ - OPNFV (Euphrates) Daisy4nfv installation instructions
+
+ - OPNFV (Euphrates) Daisy4nfv Release Notes
+
+Version change
+--------------
+.. This section describes the changes made since the last version of this document.
+
+Module version change
+~~~~~~~~~~~~~~~~~~~~~
+
+This is the Euphrates release of Daisy4nfv as a deployment toolchain in
+OPNFV. The following upstream components are supported in this release.
+
+ - CentOS 7.3
+
+ - OpenStack (Ocata release)
+
+ - OpenDaylight (Carbon release)
+
+Reason for new version
+----------------------
+
+Feature additions
+~~~~~~~~~~~~~~~~~
+
++--------------------------------------+-----------------------------------------+
+| **JIRA REFERENCE**                   | **SLOGAN**                              |
+|                                      |                                         |
++--------------------------------------+-----------------------------------------+
+|                                      | Support OpenDaylight Carbon             |
+|                                      |                                         |
++--------------------------------------+-----------------------------------------+
+|                                      | Support OpenStack Ocata                 |
+|                                      |                                         |
++--------------------------------------+-----------------------------------------+
+
+
+
+Bug corrections
+~~~~~~~~~~~~~~~
+
+**JIRA TICKETS:**
+
++--------------------------------------+--------------------------------------+
+| **JIRA REFERENCE**                   | **SLOGAN**                           |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+|                                      |                                      |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+
+
+Known Limitations, Issues and Workarounds
+=========================================
+
+System Limitations
+------------------
+
+**Max number of blades:** 1 Jumphost, 3 Controllers, 20 Compute blades
+
+**Min number of blades:** 1 Jumphost, 1 Controller, 1 Compute blade
+
+**Storage:** Ceph is the only supported storage configuration
+
+**Min Jumphost requirements:** At least 16 GB of RAM, 16-core CPU
+
+Known issues
+------------
+
++----------------------+-------------------------------+-----------------------+
+| **Scenario**         | **Issue**                     | **Workarounds**       |
++----------------------+-------------------------------+-----------------------+
+|                      |                               |                       |
+|                      |                               |                       |
+|                      |                               |                       |
++----------------------+-------------------------------+-----------------------+
+| All HA scenarios     | Occasionally lose VIP         | Fails in the test     |
+|                      |                               | case, normal in usage |
++----------------------+-------------------------------+-----------------------+
+
+
+Test Result
+===========
+TODO
+
diff --git a/docs/releasenotes/index.rst b/docs/releasenotes/index.rst
deleted file mode 100644
index 0da52b5f..00000000
--- a/docs/releasenotes/index.rst
+++ /dev/null
@@ -1,18 +0,0 @@
-.. _daisy-releasenotes:
-
-.. This document is protected/licensed under the following conditions
-.. (c) Sun Jing (ZTE corporation)
-.. Licensed under a Creative Commons Attribution 4.0 International License.
-.. You should have received a copy of the license along with this work.
-.. If not, see .
-
-***************************
-Release notes for Daisy4nfv
-***************************
-
-.. toctree::
-   :numbered:
-   :maxdepth: 2
-
-   release-notes.rst
-
diff --git a/docs/releasenotes/release-notes.rst b/docs/releasenotes/release-notes.rst
deleted file mode 100644
index 629d05de..00000000
--- a/docs/releasenotes/release-notes.rst
+++ /dev/null
@@ -1,140 +0,0 @@
-
-.. This document is protected/licensed under the following conditions
-.. (c) Sun Jing (ZTE corporation)
-.. Licensed under a Creative Commons Attribution 4.0 International License.
-.. You should have received a copy of the license along with this work.
-.. If not, see .
-
-
-========
-Abstract
-========
-
-This document covers features, limitations and required system resources of
-OPNFV E 1.0 release when using Daisy4nfv as a deployment tool.
-
-Introduction
-============
-
-Daisy4nfv is an OPNFV installer project based on open source project Daisycloud-core,
-which provides containerized deployment and management of OpenStack and other distributed systems such as OpenDaylight.
- -Release Data -============ - -+--------------------------------------+--------------------------------------+ -| **Project** | Daisy4nfv | -| | | -+--------------------------------------+--------------------------------------+ -| **Repo/tag** | Daisy4nfv/Euphrates.1.0 | -| | | -+--------------------------------------+--------------------------------------+ -| **Release designation** | Euphrates.1.0 | -| | | -+--------------------------------------+--------------------------------------+ -| **Release date** | | -| | | -+--------------------------------------+--------------------------------------+ -| **Purpose of the delivery** | OPNFV Euphrates release | -| | | -+--------------------------------------+--------------------------------------+ - -Deliverables ------------- - -Software deliverables -~~~~~~~~~~~~~~~~~~~~~ - - - Daisy4nfv/Euphrates.1.0 ISO, please get it from `OPNFV software download page `_ - -.. _document-label: - -Documentation deliverables -~~~~~~~~~~~~~~~~~~~~~~~~~~ - - - OPNFV(Danube) Daisy4nfv installation instructions - - - OPNFV(Danube) Daisy4nfv Release Notes - -Version change --------------- -.. This section describes the changes made since the last version of this document. - -Module version change -~~~~~~~~~~~~~~~~~~~~~ - -This is the Euphrates release of Daisy4nfv as a deployment toolchain in OPNFV, the following -upstream components supported with this release. - - - Centos 7.3 - - - Openstack (Ocata release) - - - Opendaylight (Carbon release) - -Reason for new version ----------------------- - -Feature additions -~~~~~~~~~~~~~~~~~ - -+--------------------------------------+-----------------------------------------+ -| **JIRA REFERENCE** | **SLOGAN** | -| | | -+--------------------------------------+-----------------------------------------+ -| | Support OpenDayLight Carbon | -| | | -+--------------------------------------+-----------------------------------------+ -| | Support OpenStack Ocata | -| | | -+--------------------------------------+-----------------------------------------+ - - - -Bug corrections -~~~~~~~~~~~~~~~ - -**JIRA TICKETS:** - -+--------------------------------------+--------------------------------------+ -| **JIRA REFERENCE** | **SLOGAN** | -| | | -+--------------------------------------+--------------------------------------+ -| | | -| | | -+--------------------------------------+--------------------------------------+ - - -Known Limitations, Issues and Workarounds -========================================= - -System Limitations ------------------- - -**Max number of blades:** 1 Jumphost, 3 Controllers, 20 Compute blades - -**Min number of blades:** 1 Jumphost, 1 Controller, 1 Compute blade - -**Storage:** Ceph is the only supported storage configuration - -**Min Jumphost requirements:** At least 16GB of RAM, 16 core CPU - -Known issues ------------- - -+----------------------+-------------------------------+-----------------------+ -| **Scenario** | **Issue** | **Workarounds** | -+----------------------+-------------------------------+-----------------------+ -| | | | -| | | | -| | | | -+----------------------+-------------------------------+-----------------------+ -| All HA scenario | Occasionally lose VIP | Failed in testcase, | -| | | normal in usage | -+----------------------+-------------------------------+-----------------------+ - - -Test Result -=========== -TODO - -- cgit