( Younjin.firstname.lastname@example.org, 정윤진)
In this post, I'll try to explain how to make a Swift storage cluster work with the Keystone service. If you have worked with this open-source project, you may have struggled or suffered through the configuration, because there is not much information about this brand-new cluster. And it's not only the lack of information: there are so many old versions and old instructions floating around that very few certified working sets exist. The Launchpad site is in a state of chaos. If you google the topic, you find only questions without right answers, and closed issues without comments. Another cause is that this project is very young, and things change very fast. All of this may make you angry enough to throw your keyboard while setting up this brand-new cluster. The hardest part is understanding how Keystone works and how it plugs into swift-proxy's middleware. I will not explain what Keystone is, nor what Swift is. I'll just give you the instructions that came out of my research. I have also built a Swift + swauth system before, but that is not as hard, and swauth is not so welcome nowadays.
One more thing to let you know: this research is based on a multi-node cluster. Every component is installed on a physically separate server, and the network is built on Arista 10G switches with iBGP and eBGP via Quagga. But the network is not a major topic of this post, so I'll keep that for next time.
The storage cloud installation on multiple nodes went like this.
1. Install and connect every node physically. Power cord, TwinAx, UTP, etc.
2. Setup the server BIOS and IPMI.
3. Configure all switches.
4. Install Ubuntu 12.04 on management server.
5. Write Chef code for automated install.
6. Prepare all nodes with PXE boot and network install.
7. Push each configuration to every node using Chef.
8. Verify that everything works.
The working environment is as follows:
1. Keystone server running at 192.168.1.200.
2. Swift proxy server running at 192.168.1.2/24 for admin, with a 10G interface for service traffic. I made a simple rule for network expansion, so the 10G network has a similar IP structure, such as 10.0.1.2/24. It also has an IPMI network for physical control (boot order, power management) at 192.168.1.102. Every cabinet is designed to use a /24 network.
3. Swift storage servers running at 192.168.1.3–10, also on 10G at 10.0.1.3–10.0.1.10.
4. Swift version is 1.6.1; you can get it from the OpenStack GitHub.
5. Keystone is also available on the OpenStack GitHub.
6. Each storage server has 12 disks for storage and 2 SSDs for the OS.
7. Ubuntu version is 12.04 LTS.
8. Quanta servers were used. ( X22-RQ, X12 series )
9. Arista 7124SX per cabinet.
10. Cisco 2960 per cabinet.
11. BlueSnap-XP, RS-232 to Bluetooth.
Every server (a.k.a. bare metal) was installed with automation tools: Chef and PXE boot. I downloaded all packages from github.com, and you can find the URLs easily. To install Swift successfully, some Python modules are needed. If a module is missing, Python will show you an error message, and from it you can easily tell which module is needed. If you don't know the exact package name, you can search for it with "apt-get update ; apt-cache search <string>".
I installed the additional Python module packages below.
root@allnew-quanta:/root# apt-get install python-eventlet python-netifaces \
    python-pastedeploy python-webob openssl libssl-dev \
    python-setuptools python-lxml python-libxslt1 python-dev
After installing Swift, you'll need to configure the storage servers: create the XFS file systems, mount them all, and build the rings. These steps are well documented in the Swift multi-node install guide, so I won't describe them here. The more important part is setting up the Swift proxy server. As you may know, the proxy server needs the Keystone middleware, so you have to install Keystone alongside Swift on the proxy server. That can easily be done with python setup.py install. After installing it, set up proxy-server.conf. Here is the recommended (I mean, basic) configuration.
[DEFAULT]
cert_file = /etc/swift/cert.crt
key_file = /etc/swift/cert.key
bind_port = 8080
user = swift
log_facility = LOG_LOCAL1
workers = 5

[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken keystone proxy-server

[app:proxy-server]
use = egg:swift#proxy
account_autocreate = true

[filter:keystone]
paste.filter_factory = keystone.middleware.swift_auth:filter_factory
operator_roles = admin, swiftoperator

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
# Delaying the auth decision is required to support token-less
# usage for anonymous referrers ('.r:*') or for tempurl/formpost
# middleware.
delay_auth_decision = 0
auth_port = 35357
auth_protocol = http
auth_host = 192.168.1.200
auth_token = ADMIN
admin_token = ADMIN

[filter:cache]
use = egg:swift#memcache
set log_name = cache

[filter:catch_errors]
use = egg:swift#catch_errors

[filter:healthcheck]
use = egg:swift#healthcheck
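One assumption worth flagging about the configuration above: the [filter:cache] middleware talks to memcached, which it expects at 127.0.0.1:11211 by default. If you run more than one proxy, my understanding is that all proxies should share the same memcached servers, which you can point at explicitly:

```ini
[filter:cache]
use = egg:swift#memcache
set log_name = cache
# Illustrative address; list every memcached instance, comma-separated,
# identically on all proxies.
memcache_servers = 192.168.1.2:11211
```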
You may need to install additional Python modules to run the Swift proxy. It's the same story: install the missing packages by following the error messages.
Once the Swift cluster is done, you'll need to configure the Keystone server. You may also want to consider high availability for the Keystone service. Keystone can sit on either SQLite or MySQL, so you can find a suitable approach with a Google search.
Here's the keystone.conf configuration running against a database. As you may know, there is a way to set it up with a static file, but if you want features such as sharing, you'll need a database to manage Keystone.
[DEFAULT]
# A "shared secret" between keystone and other openstack services
admin_token = ADMIN
# The IP address of the network interface to listen on
bind_host = 0.0.0.0
# The port number which the public service listens on
public_port = 5000
# The port number which the public admin listens on
admin_port = 35357
# The port number which the OpenStack Compute service listens on
# compute_port = 8774

# === Logging Options ===
# Print debugging output
verbose = True
# Print more verbose output
# (includes plaintext request logging, potentially including passwords)
debug = True
# Name of log file to output to. If not set, logging will go to stdout.
log_file = keystone.log
# The directory to keep log files in (will be prepended to --logfile)
log_dir = /var/log/keystone
# Use syslog for logging.
# use_syslog = False
# syslog facility to receive log lines
# syslog_log_facility = LOG_USER
# If this option is specified, the logging configuration file specified is
# used and overrides any other logging options specified. Please see the
# Python logging module documentation for details on logging configuration
# files.
# log_config = logging.conf
# A logging.Formatter log message format string which may use any of the
# available logging.LogRecord attributes.
#log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# Format string for %(asctime)s in log records.
log_date_format = %Y-%m-%d %H:%M:%S
# onready allows you to send a notification when the process is ready to serve
# For example, to have it notify using systemd, one could set shell command:
# onready = systemd-notify --ready
# or a module with notify() method:
# onready = keystone.common.systemd

[sql]
# The SQLAlchemy connection string used to connect to the database
#connection = sqlite:////var/lib/keystone/keystone.db
connection = mysql://keystone:XXXXX@localhost/keystone
# the timeout before idle sql connections are reaped
# idle_timeout = 200

[identity]
driver = keystone.identity.backends.sql.Identity

[catalog]
# dynamic, sql-based backend (supports API/CLI-based management commands)
driver = keystone.catalog.backends.sql.Catalog
# static, file-based backend (does *NOT* support any management commands)
#driver = keystone.catalog.backends.templated.TemplatedCatalog
template_file = /etc/keystone/default_catalog.templates

[token]
driver = keystone.token.backends.kvs.Token
# Amount of time a token should remain valid (in seconds)
expiration = 86400

[policy]
driver = keystone.policy.backends.rules.Policy

[ec2]
# driver = keystone.contrib.ec2.backends.kvs.Ec2

[ssl]
#enable = True
#certfile = /etc/keystone/ssl/certs/keystone.pem
#certfile = /etc/keystone/cert.crt
#keyfile = /etc/keystone/ssl/private/keystonekey.pem
#keyfile = /etc/keystone/cert.key
#ca_certs = /etc/keystone/ssl/certs/ca.pem
#cert_required = True

[signing]
certfile = /etc/keystone/ssl/certs/signing_cert.pem
keyfile = /etc/keystone/ssl/private/signing_key.pem
#ca_certs = /etc/keystone/ssl/certs/ca.pem
#key_size = 1024
#valid_days = 3650
#ca_password = None

[ldap]
# url = ldap://localhost
# user = dc=Manager,dc=example,dc=com
# password = None
# suffix = cn=example,cn=com
# use_dumb_member = False
# user_tree_dn = ou=Users,dc=example,dc=com
# user_objectclass = inetOrgPerson
# user_id_attribute = cn
# user_name_attribute = sn
# tenant_tree_dn = ou=Groups,dc=example,dc=com
# tenant_objectclass = groupOfNames
# tenant_id_attribute = cn
# tenant_member_attribute = member
# tenant_name_attribute = ou
# role_tree_dn = ou=Roles,dc=example,dc=com
# role_objectclass = organizationalRole
# role_id_attribute = cn
# role_member_attribute = roleOccupant

[filter:debug]
paste.filter_factory = keystone.common.wsgi:Debug.factory

[filter:token_auth]
paste.filter_factory = keystone.middleware:TokenAuthMiddleware.factory

[filter:admin_token_auth]
paste.filter_factory = keystone.middleware:AdminTokenAuthMiddleware.factory

[filter:xml_body]
paste.filter_factory = keystone.middleware:XmlBodyMiddleware.factory

[filter:json_body]
paste.filter_factory = keystone.middleware:JsonBodyMiddleware.factory

[filter:user_crud_extension]
paste.filter_factory = keystone.contrib.user_crud:CrudExtension.factory

[filter:crud_extension]
paste.filter_factory = keystone.contrib.admin_crud:CrudExtension.factory

[filter:ec2_extension]
paste.filter_factory = keystone.contrib.ec2:Ec2Extension.factory

[filter:s3_extension]
paste.filter_factory = keystone.contrib.s3:S3Extension.factory

[filter:url_normalize]
paste.filter_factory = keystone.middleware:NormalizingFilter.factory

[filter:stats_monitoring]
paste.filter_factory = keystone.contrib.stats:StatsMiddleware.factory

[filter:stats_reporting]
paste.filter_factory = keystone.contrib.stats:StatsExtension.factory

[app:public_service]
paste.app_factory = keystone.service:public_app_factory

[app:admin_service]
paste.app_factory = keystone.service:admin_app_factory

[pipeline:public_api]
pipeline = stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug ec2_extension user_crud_extension public_service

[pipeline:admin_api]
pipeline = stats_monitoring url_normalize token_auth admin_token_auth xml_body json_body debug stats_reporting ec2_extension s3_extension crud_extension admin_service

[app:public_version_service]
paste.app_factory = keystone.service:public_version_app_factory

[app:admin_version_service]
paste.app_factory = keystone.service:admin_version_app_factory

[pipeline:public_version_api]
pipeline = stats_monitoring url_normalize xml_body public_version_service

[pipeline:admin_version_api]
pipeline = stats_monitoring url_normalize xml_body admin_version_service

[composite:main]
use = egg:Paste#urlmap
/v2.0 = public_api
/ = public_version_api

[composite:admin]
use = egg:Paste#urlmap
/v2.0 = admin_api
/ = admin_version_api
Now you can start the Keystone service using keystone-all. If you need service management for Keystone, you can write a script for chkconfig; good documents already exist.
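For example, since this cluster runs Ubuntu 12.04, a minimal Upstart job could keep keystone-all alive. This is only a sketch; the file path, runlevels, and exec line are my assumptions, not something shipped with the source install:

```ini
# /etc/init/keystone.conf (sketch)
description "Keystone API server"
start on runlevel [2345]
stop on runlevel [016]
respawn
exec keystone-all --config-file /etc/keystone/keystone.conf
```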
Oh, before starting your Keystone service with MySQL, you need to configure the MySQL service: create the database and user, then grant the proper privileges. You can see the database connection string in the configuration above.
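A rough sketch of that MySQL preparation, assuming the database name, user, and password from the connection string above (the password XXXXX is a placeholder; the mysql and keystone-manage invocations are commented out since they need the live services):

```shell
# Write the statements to a file so they can be reviewed first.
cat > /tmp/keystone_db.sql <<'EOF'
CREATE DATABASE IF NOT EXISTS keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'XXXXX';
FLUSH PRIVILEGES;
EOF
# Apply it, then build the schema:
# mysql -u root -p < /tmp/keystone_db.sql
# keystone-manage db_sync
```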
Before setting up Keystone, you need to understand how tenant/user/role/key map to account/user/key in Swift. If you have used the Swift API before, you know the credential is an account:user and key pair. A tenant maps to an account, and a user is a user.
Here are the instructions for setting up Keystone. Please remember that our proxy-server.conf allows the roles "admin" and "swiftoperator", and keystone.conf has the default admin token, "ADMIN".
First of all, install python-keystoneclient on the machine you'll do the setup from. There is no Mac version, so install it on your Keystone server. The OpenStack keystone source package does not contain the "keystone" CLI tool, so you'll need the client if you didn't install Keystone with apt-get.
a. Create a tenant.
root@allnew-quanta: ~# keystone --username admin --token ADMIN --endpoint http://192.168.1.200:35357/v2.0 tenant-create --name=service
Note the "endpoint" given on the command line. You can also set it as an OS environment variable.
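As a sketch of that: SERVICE_TOKEN and SERVICE_ENDPOINT are the variable names this era of python-keystoneclient reads for token-based admin access, to the best of my knowledge, so you can export them once:

```shell
# Export the token and endpoint once instead of repeating them per command.
export SERVICE_TOKEN=ADMIN
export SERVICE_ENDPOINT=http://192.168.1.200:35357/v2.0
# keystone tenant-list   # now works without --token / --endpoint
```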
b. Then, create a user. Type "keystone --token ADMIN --endpoint YOUR_ENDPOINT tenant-list" to get the tenant ID.
root@allnew-quanta: ~# keystone --token ADMIN --endpoint http://192.168.1.200:35357/v2.0 user-create --name=swift --pass=swiftadmin --tenant_id=ID
c. Create the services. Keystone is usually registered as the identity service, and Swift as object-store.
root@allnew-quanta: ~# keystone --token ADMIN --endpoint http://192.168.1.200:35357/v2.0 service-create --name=swift --type=object-store --description="Swift Service"
root@allnew-quanta: ~# keystone --token ADMIN --endpoint http://192.168.1.200:35357/v2.0 service-create --name=keystone --type=identity --description="Keystone Identity Service"
d. Attach each service to an endpoint.
root@allnew-quanta: ~# keystone --token ADMIN --endpoint http://192.168.1.200:35357/v2.0 endpoint-create --region RegionOne --service_id=SWIFT_SERVICE_GUID --publicurl 'https://192.168.1.2:8080/v1/AUTH_%(tenant_id)s' --adminurl 'https://192.168.1.2:8080/' --internalurl 'https://192.168.1.2:8080/v1/AUTH_%(tenant_id)s'
root@allnew-quanta: ~# keystone --token ADMIN --endpoint http://192.168.1.200:35357/v2.0 endpoint-create --region RegionOne --service_id=KEYSTONE_SERVICE_GUID --publicurl 'http://192.168.1.200:5000/v2.0' --adminurl 'http://192.168.1.200:35357/v2.0' --internalurl 'http://192.168.1.200:5000/v2.0'
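Note the %(tenant_id)s in the public and internal URLs: it is a Python-style template that the service catalog fills in with the authenticated tenant's id, producing each tenant's storage URL. As a quick illustration (the tenant id here is just an example value):

```shell
# How the publicurl template expands for one tenant (illustrative id).
TENANT_ID=6049fcdd4c3a46909a9dbaad04f1636a
echo "https://192.168.1.2:8080/v1/AUTH_${TENANT_ID}"
```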
e. Create roles.
root@allnew-quanta: ~# keystone --token ADMIN --endpoint http://192.168.1.200:35357/v2.0 role-create --name=admin
root@allnew-quanta: ~# keystone --token ADMIN --endpoint http://192.168.1.200:35357/v2.0 role-create --name=swiftoperator
f. Attach the user to a role.
root@allnew-quanta: ~# keystone --token ADMIN --endpoint http://192.168.1.200:35357/v2.0 user-role-add --tenant_id=TENANT_ID --user=USER_ID --role=ROLE_ID
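If you want to script this last step, the IDs can be pulled out of the client's table output with awk. This is only a convenience sketch, and it assumes the bordered table format this client version prints:

```shell
# Sample shape of "keystone tenant-list" output (an assumption based on this
# client version), used here to show the extraction.
table='+----------------------------------+---------+
|                id                |   name  |
+----------------------------------+---------+
| 6049fcdd4c3a46909a9dbaad04f1636a | service |
+----------------------------------+---------+'
# Pick the row for the "service" tenant and strip spaces from the id column.
TENANT_ID=$(echo "$table" | awk -F'|' '/ service /{gsub(/ /,"",$2); print $2}')
echo "$TENANT_ID"
# In real use: TENANT_ID=$(keystone tenant-list | awk ...), same for user-list
# and role-list, then:
# keystone ... user-role-add --tenant_id=$TENANT_ID --user=$USER_ID --role=$ROLE_ID
```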
With the swift tool (the Swift Python client), you can check whether it works. If it works fine, you'll see results like these.
root@allnew-quanta:/etc/keystone# swift -V 2 -A http://192.168.1.200:35357/v2.0 -U service:swift -K swiftadmin stat
   Account: AUTH_6049fcdd4c3a46909a9dbaad04f1636a
Containers: 0
   Objects: 0
     Bytes: 0
Accept-Ranges: bytes
X-Timestamp: 1345122033.16266
X-Trans-Id: tx166227c25e604e4db4c5bdc9039041a4
This means you can now do CRUD against the Swift storage cluster with the swift tool.
root@allnew-quanta: ~# swift -V 2 -A http://192.168.1.200:35357/v2.0 -U service:swift -K swiftadmin upload test keystone.conf
keystone.conf
root@allnew-quanta: ~# swift -V 2 -A http://192.168.1.200:35357/v2.0 -U service:swift -K swiftadmin list test
Basically, Keystone controls authentication for users. Beyond that, it can be used to control container sharing via tenants. Currently I'm focusing on how to implement this as an enterprise service and how to install it with Chef, so I won't explain more about that here.
As I mentioned at the top of this post, installing Swift and Keystone on multiple nodes is not simple work right now. I have no doubt that good methods for installing these clusters will come out later from the many contributors. But for now, it was not easy to do, and that is why I wrote this post at this moment.
If you're a web developer, you may be interested in how to use the Keystone and Swift APIs in your applications. There are good explanations of how to do this with curl; to understand the flow in more detail, visit this blog.
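As a rough sketch of what that curl flow looks like: the request body shape below is Keystone v2.0's token API, and the actual curl calls are commented out since they need the live services from this post's setup.

```shell
# Write the v2.0 token request body the swift tool builds under the hood.
cat > /tmp/token_req.json <<'EOF'
{"auth": {"tenantName": "service",
          "passwordCredentials": {"username": "swift", "password": "swiftadmin"}}}
EOF
# curl -s -H 'Content-Type: application/json' -d @/tmp/token_req.json \
#      http://192.168.1.200:5000/v2.0/tokens
# The JSON response carries access.token.id; send it as X-Auth-Token against
# the object-store endpoint found in access.serviceCatalog:
# curl -H "X-Auth-Token: <token>" https://192.168.1.2:8080/v1/AUTH_<tenant_id>
```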
Now that the cluster works, you can test many things on a running cloud storage, S3-like infrastructure. And that's exactly what I'm working on.
I hope you'll end up with a successfully working private storage cloud of your own.