
OpenStack SFC Manual Install (Liberty)

OpenStack Liberty SFC Install (not devstack)


Recently, as NFV (Network Function Virtualization) has gained attention, OpenStack has been heating up as well.

This may be because telcos are actively pushing NFV environments from behind the scenes, but in any case, efforts to build NFV environments on top of OpenStack are appearing here and there.


There are many possible reasons, but I suspect the biggest is that OpenStack makes it easy to construct the virtual networks and the SFs (Service Functions) that form the core of NFV, and can provide an SFC (Service Function Chaining) service that uses them.


The foundational technology for an NFV environment is network virtualization, together with the ability to flexibly connect services on the virtual network.


Services that build virtual networks and SF VMs on OpenStack and leverage an SDN controller (ODL/ONOS) are already being implemented at the PoC stage.



[Figure 1] OpenStack-based Service Function Chaining flow



Speaking very broadly from the diagram above: when SFs are created as VMs, the goal is to steer the flow of packets between VMs according to their purpose, where an SF VM can be regarded as "a virtual machine that hosts a network service."


That is, if there is traffic from source SF01 toward destination SF05, that traffic should pass through SF02 (which handles the firewall function) and then through SF04 (which handles load balancing) before reaching its destination.
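Once networking-sfc is in place (section 2 below), a chain like this can be expressed with the neutron CLI. A rough sketch, assuming SF02 and SF04 already exist as VMs with Neutron ports named p2-in/p2-out and p4-in/p4-out (the port names and IP prefixes are hypothetical):

```shell
# Register each SF VM's ingress/egress ports as a port pair
neutron port-pair-create --ingress p2-in --egress p2-out PP-FW
neutron port-pair-create --ingress p4-in --egress p4-out PP-LB

# Group the pairs (one group per hop in the chain)
neutron port-pair-group-create --port-pair PP-FW PG-FW
neutron port-pair-group-create --port-pair PP-LB PG-LB

# Classify the traffic to steer (SF01/SF05 subnets are hypothetical)
neutron flow-classifier-create \
    --source-ip-prefix 10.0.0.0/24 \
    --destination-ip-prefix 10.0.5.0/24 FC1

# Build the chain: firewall hop first, then the load-balancer hop
neutron port-chain-create \
    --port-pair-group PG-FW --port-pair-group PG-LB \
    --flow-classifier FC1 PC1
```

The hop order in `port-chain-create` is what realizes the SF02 → SF04 steering described above.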


In OpenStack this work has started as the networking-sfc project, and the steps below are an installation method based on the networking-sfc wiki page.


References

- https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining

- http://docs.openstack.org/developer/networking-sfc/

- https://github.com/openstack/networking-sfc



Special thanks to: 정치욱 of KT, for his help with the content below.



networking-sfc Basic Setup on OpenStack




1. Basic Setup


1-1. Environment

  • OpenStack Liberty Documentation (http://docs.openstack.org/liberty/install-guide-ubuntu/)
  • Use an Ubuntu 14.04 image
  • 3 server nodes
    • Controller Node / Compute01 / Compute02

[Figure 2] OpenStack Environment

  • NIC interfaces
    • eth0 (bound to br-ex on the Controller Node) / eth1 (Management) / eth2 (Data)

###### Controller Node  ######


$ sudo vi /etc/network/interfaces


# OVS br-ex bind eth0

auto br-ex

iface br-ex inet static

      address <public_ip>

      gateway <public_gateway_IP>

      netmask 255.255.255.0

      dns-nameservers 8.8.8.8


# Public Network

auto eth0

iface eth0 inet manual

       up ip link set dev $IFACE up

       down ip link set dev $IFACE down


# Management Network

auto eth1

iface eth1 inet static

      address 192.168.56.10

      netmask 255.255.255.0


# Data Network

auto eth2

iface eth2 inet static

      address 172.168.56.10

      netmask 255.255.255.0

################################


###### Compute Node 01 ######


$ sudo vi /etc/network/interfaces


# Public Network

auto eth0

iface eth0 inet static

      address <public_ip>

      gateway <public_gateway_IP>

      netmask 255.255.255.0

      dns-nameservers 8.8.8.8


# Management Network

auto eth1

iface eth1 inet static

      address 192.168.56.20

      netmask 255.255.255.0



# Data Network

auto eth2

iface eth2 inet static

      address 172.168.56.20

      netmask 255.255.255.0

################################


###### Compute Node 02 ######


$ sudo vi /etc/network/interfaces


# Public Network

auto eth0

iface eth0 inet static

      address <public_ip>

      gateway <public_gateway_IP>

      netmask 255.255.255.0

      dns-nameservers 8.8.8.8


# Management Network

auto eth1

iface eth1 inet static

      address 192.168.56.30

      netmask 255.255.255.0



# Data Network

auto eth2

iface eth2 inet static

      address 172.168.56.30

      netmask 255.255.255.0

################################


1-2. Update & Upgrade / Install Prerequisites


$ sudo apt-get update

$ sudo apt-get -y upgrade


1-3. Configure the OpenStack Liberty Repository


$ sudo apt-get install software-properties-common

$ sudo add-apt-repository cloud-archive:liberty



2. networking-sfc setup


2-1. networking-sfc download (All Nodes)


$ sudo apt-get install -y git

$ git clone git://git.openstack.org/openstack/networking-sfc.git -b stable/liberty

$ sudo pip install -e /home/{user}/networking-sfc

$ sudo su

$ . admin-openrc.sh

$ neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini --subproject networking-sfc upgrade head

$ exit

$ cd ~/networking-sfc

$ sudo python setup.py install

$ sudo cp /usr/local/bin/neutron-openvswitch-agent /usr/bin/neutron-openvswitch-agent
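After the install, it is worth restarting neutron-server and confirming that the SFC extensions actually loaded. A quick check, assuming the Ubuntu 14.04 upstart service name and that the plugin exposes the sfc and flow_classifier extension aliases:

```shell
$ sudo service neutron-server restart
$ . admin-openrc.sh
# Both extensions should appear in the list if the plugin loaded
$ neutron ext-list | grep -i -E "sfc|flow_classifier"
```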




3. neutron configuration


3-1. Controller Node (Controller+Network)


$ sudo vi /etc/nova/nova.conf


[DEFAULT]

dhcpbridge_flagfile=/etc/nova/nova.conf

dhcpbridge=/usr/bin/nova-dhcpbridge

logdir=/var/log/nova

state_path=/var/lib/nova

lock_path=/var/lock/nova

force_dhcp_release=True

libvirt_use_virtio_for_bridges=True

verbose=True

ec2_private_dns_show_ip=True

api_paste_config=/etc/nova/api-paste.ini

enabled_apis=osapi_compute,metadata

 

rpc_backend = rabbit

 

auth_strategy = keystone

 

my_ip =  <Controller Node eth1 IP>

 

network_api_class = nova.network.neutronv2.api.API

security_group_api = neutron

linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver

firewall_driver = nova.virt.firewall.NoopFirewallDriver

 

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = nova

password = NOVA_PASS

 

[database]

connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

 

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = RABBIT_PASS

 

[vnc]

vncserver_listen = $my_ip

vncserver_proxyclient_address = $my_ip

 

[glance]

host = controller

 

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

 

[neutron]

url = http://controller:9696

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

region_name = RegionOne

project_name = service

username = neutron

password = NEUTRON_PASS

 

service_metadata_proxy = True

metadata_proxy_shared_secret = METADATA_SECRET

------------------------------------------------------------------------------------------------------


$ sudo vi /etc/neutron/neutron.conf


[DEFAULT]

 

verbose = True

core_plugin = ml2

service_plugins = router,networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin,networking_sfc.services.sfc.plugin.SfcPlugin

auth_strategy = keystone

allow_overlapping_ips = True

...

notify_nova_on_port_status_changes = True

...

notify_nova_on_port_data_changes = True

...

nova_url = http://controller:8774/v2

rpc_backend=rabbit

 

[agent]

...

root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

 

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = neutron

password = NEUTRON_PASS

 

[database]

connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

 

[nova]

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

region_name = RegionOne

project_name = service

username = nova

password = NOVA_PASS

 

[oslo_concurrency]

...

lock_path = $state_path/lock

...

 

[oslo_messaging_rabbit]

...

...

rabbit_host = controller

rabbit_userid = openstack

...

rabbit_password = RABBIT_PASS

...

 

[sfc]

drivers=ovs

------------------------------------------------------------------------------------------------------


$ sudo vi /etc/neutron/l3_agent.ini


[DEFAULT]

..

verbose = True

...

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

...

...

external_network_bridge = br-ex

router_delete_namespaces = True

...

...

agent_mode = legacy

...

------------------------------------------------------------------------------------------------------


$ sudo vi /etc/neutron/dhcp_agent.ini


[DEFAULT]

...

verbose = True

...

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

...

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

...

use_namespaces = True

...

enable_isolated_metadata = True

...

enable_metadata_network = True

...

dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

...

dhcp_delete_namespaces = True

...

------------------------------------------------------------------------------------------------------


$ sudo vi /etc/neutron/metadata_agent.ini


[DEFAULT]

...

verbose = True

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_region = RegionOne

auth_plugin = password

project_domain_id = default

user_domain_id = default

...

project_name = service

username = neutron

password = NEUTRON_PASS

nova_metadata_ip = controller

...

nova_metadata_port = 8775

...

...

metadata_proxy_shared_secret = METADATA_SECRET

...

------------------------------------------------------------------------------------------------------


$ sudo vi /etc/neutron/plugins/ml2/ml2_conf.ini


[ml2]

...

type_drivers = flat,vlan,gre,vxlan,geneve

...

tenant_network_types = vxlan

...

mechanism_drivers = openvswitch

...

extension_drivers = port_security

...

 

[ml2_type_vxlan]

...

vni_ranges = 1001:2000

...

 

[securitygroup]

...

firewall_driver = neutron.agent.firewall.NoopFirewallDriver

...

------------------------------------------------------------------------------------------------------


$ sudo vi /etc/neutron/plugins/ml2/openvswitch_agent.ini


[ovs]

integration_bridge = br-int

...

tunnel_bridge = br-tun

...

local_ip = <eth2 IP>

...

 

[agent]

tunnel_types = vxlan

...

vxlan_udp_port = 4789

...
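With the controller configuration done, the nova and neutron services need a restart for the changes to take effect. A sketch, assuming the Ubuntu 14.04 (upstart) service names from the Liberty install guide:

```shell
$ sudo service nova-api restart
$ sudo service neutron-server restart
$ sudo service neutron-plugin-openvswitch-agent restart
$ sudo service neutron-l3-agent restart
$ sudo service neutron-dhcp-agent restart
$ sudo service neutron-metadata-agent restart
```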


3-2. Compute01, Compute02


$ sudo vi /etc/nova/nova.conf


[DEFAULT]

dhcpbridge_flagfile=/etc/nova/nova.conf

dhcpbridge=/usr/bin/nova-dhcpbridge

logdir=/var/log/nova

state_path=/var/lib/nova

lock_path=/var/lock/nova

force_dhcp_release=True

libvirt_use_virtio_for_bridges=True

verbose=True

ec2_private_dns_show_ip=True

api_paste_config=/etc/nova/api-paste.ini

enabled_apis=ec2,osapi_compute,metadata

 

rpc_backend = rabbit

 

auth_strategy = keystone

 

my_ip = <Compute Node eth1 IP>

 

network_api_class = nova.network.neutronv2.api.API

security_group_api = neutron

linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver

firewall_driver = nova.virt.firewall.NoopFirewallDriver

 

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = RABBIT_PASS

 

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = nova

password = NOVA_PASS

 

[vnc]

enabled = True

vncserver_listen = 0.0.0.0

vncserver_proxyclient_address = $my_ip

novncproxy_base_url = http://<Controller Node br-ex IP>:6080/vnc_auto.html

 

[glance]

host = controller

 

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

 

[neutron]

url = http://controller:9696

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

region_name = RegionOne

project_name = service

username = neutron

password = NEUTRON_PASS

------------------------------------------------------------------------------------------------------


$ sudo vi /etc/neutron/neutron.conf


[DEFAULT]

...

verbose = True

...

core_plugin = ml2

service_plugins = router,networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin,networking_sfc.services.sfc.plugin.SfcPlugin

auth_strategy = keystone

allow_overlapping_ips = True

...

rpc_backend=rabbit

 

[agent]

...

root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

 

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = neutron

password = NEUTRON_PASS

 

[oslo_concurrency]

...

lock_path = $state_path/lock

...

 

[oslo_messaging_rabbit]

...

...

rabbit_host = controller

rabbit_userid = openstack

...

rabbit_password = RABBIT_PASS

...

 

[sfc]

drivers=ovs

------------------------------------------------------------------------------------------------------


$ sudo vi /etc/neutron/plugins/ml2/ml2_conf.ini


[ml2]

...

type_drivers = flat,vlan,gre,vxlan,geneve

...

tenant_network_types = vxlan

...

mechanism_drivers = openvswitch

...

extension_drivers = port_security

...

 

[ml2_type_vxlan]

...

vni_ranges = 1001:2000

...

 

[securitygroup]

...

firewall_driver = neutron.agent.firewall.NoopFirewallDriver

...

------------------------------------------------------------------------------------------------------


$ sudo vi /etc/neutron/plugins/ml2/openvswitch_agent.ini


[ovs]

integration_bridge = br-int

...

tunnel_bridge = br-tun

...

local_ip = <eth2 IP>

...

 

[agent]

tunnel_types = vxlan

...

vxlan_udp_port = 4789

...
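On each compute node, restart nova-compute and the OVS agent after editing the files above (again assuming the Ubuntu 14.04 Liberty package service names):

```shell
$ sudo service nova-compute restart
$ sudo service neutron-plugin-openvswitch-agent restart
```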



4. Accessing the UI and Using the Service


horizon url : http://<Controller Node br-ex IP>/horizon


user name : admin

password : admin
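The admin-openrc.sh file sourced in section 2-1 is not shown above; a minimal version consistent with the admin/admin credentials here, following the format of the Liberty install guide, might look like this:

```shell
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
```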



