Getting Around
Browsing the OpenStack documentation we can see that the driver must support the following set of features (a rough skeleton of such a driver is sketched after the list):
- Volume Create/Delete
- Volume Attach/Detach
- Snapshot Create/Delete
- Create Volume from Snapshot
- Get Volume Stats
- Copy Image to Volume
- Copy Volume to Image
- Clone Volume
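To make that surface concrete, here is a minimal sketch of a driver class with one stub per feature above. The method names and signatures follow the Cinder volume driver interface as I understand it, but the class name is made up and the bodies are placeholders, not the driver described in this post.

from cinder.volume import driver

class MyApplianceNfsDriver(driver.VolumeDriver):
    # Skeleton only: one stub per required feature.

    def create_volume(self, volume):
        pass  # allocate backing storage for the new volume

    def delete_volume(self, volume):
        pass

    def create_snapshot(self, snapshot):
        pass

    def delete_snapshot(self, snapshot):
        pass

    def create_volume_from_snapshot(self, volume, snapshot):
        pass

    def get_volume_stats(self, refresh=False):
        pass  # report capacity and capabilities to the scheduler

    def copy_image_to_volume(self, context, volume, image_service, image_id):
        pass

    def copy_volume_to_image(self, context, volume, image_service, image_meta):
        pass

    def create_cloned_volume(self, volume, src_vref):
        pass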
So it is quite natural to start with volume creation. Unfortunately the description above doesn't tell us much about the semantics of these operations. A bit more digging around points to the cinder.volume.driver documentation, which says:
create_volume(volume) creates a volume. Can optionally return a Dictionary of changes to the volume object to be persisted.
...
fields = {'migration_status': String(default=<class 'oslo_versionedobjects.fields.UnspecifiedDefault'>,nullable=True), 'provider_id': UUID(default=<class 'oslo_versionedobjects.fields.UnspecifiedDefault'>,nullable=True), 'availability_zone': String(default=<class 'oslo_versionedobjects.fields.UnspecifiedDefault'>,nullable=True), 'terminated_at': DateTime(default=<class 'oslo_versionedobjects.fields.UnspecifiedDefault'>,nullable=True)
...
You get the idea. It turns out that volumes are stored in a database, so there is also a matching database schema in models.py, which is about as useful.
So forget about the documentation, let's dive into the source tree...
Back to the source
Since my goal was to implement an NFS-based volume driver, I examined the existing NfsDriver, which can be used by itself or as a base class for many other drivers. It is based on RemoteFsDriver, which provides common code for all NFS drivers. I hoped this would provide enough support for the new driver - I would just need to add a few API calls to communicate with the actual appliance... The first question I wanted to answer from the source was the semantics of the create_volume() call. The RemoteFsDriver provides some hints: the call returns a dictionary
volume['provider_location'] = self._find_share(volume['size'])
self._do_create_volume(volume)
return {'provider_location': volume['provider_location']}
This provider_location turns out to be a string of the form host:/path/to/remote/share that is used by the mount command to mount the NFS share.
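To illustrate how such a provider_location is consumed, here is a sketch of my own (not Cinder code); the mount point layout is an assumption:

import os
import subprocess

def mount_share(provider_location, mount_base='/var/lib/cinder/mnt'):
    # provider_location looks like "host:/path/to/remote/share".
    # Derive a per-share mount point and run the standard mount command.
    mount_point = os.path.join(mount_base, provider_location.replace('/', '_'))
    if not os.path.isdir(mount_point):
        os.makedirs(mount_point)
    subprocess.check_call(['mount', '-t', 'nfs', provider_location, mount_point])
    return mount_point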
A few NFS drivers that I looked at behaved in the following way (see the sketch after the list):
- The configuration provides the location of a file that lists the available shares;
- The driver provides some code that selects a share suitable for the new volume and sticks its NFS path into the provider_location attribute;
- The share path contains big files that represent volumes;
- All shares are always kept mounted on the cinder node.
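Reduced to its essentials, that shared-pool model looks roughly like the following. The shares file path and the naive "first share that fits" policy are simplifications of my own, not the actual NfsDriver logic.

def load_shares(shares_config='/etc/cinder/nfs_shares'):
    # One "host:/export/path" entry per line; blank lines and comments are skipped.
    with open(shares_config) as f:
        return [line.split()[0] for line in f
                if line.strip() and not line.startswith('#')]

def find_share(shares, volume_size_gb, free_space_gb):
    # free_space_gb is a callable: share -> available GiB.
    # Pick the first mounted share that has room for the new volume.
    for share in shares:
        if free_space_gb(share) >= volume_size_gb:
            return share
    raise RuntimeError('no share can fit a %d GiB volume' % volume_size_gb)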
What I wanted to do was somewhat different - I wanted to keep a 1:1 relationship between a volume and a share. This means that there is no file describing the shares - shares are created on demand as volumes are created. Also, since we may have a lot of volumes, I didn't want to keep them all mounted all the time, only mounting them as needed. The benefit is that it is very easy to manage snapshots and clones, since they are first-class citizens on the actual appliance.
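A rough sketch of create_volume() under that 1:1 model, where ApplianceClient-style calls are hypothetical stand-ins for the real appliance API:

class PerVolumeShareDriver(object):
    def __init__(self, appliance):
        self.appliance = appliance  # hypothetical client for the appliance's management API

    def create_volume(self, volume):
        # Ask the appliance for a dedicated share sized for this volume.
        share = self.appliance.create_share(name=volume['name'],
                                            size_gb=volume['size'])
        # Nothing is mounted here; the share is mounted later, only when needed.
        return {'provider_location': share}  # e.g. "appliance:/exports/volume-xyz"

    def create_snapshot(self, snapshot):
        # Snapshots are first-class objects on the appliance, so this is a single call.
        self.appliance.create_snapshot(snapshot['volume_name'], snapshot['name'])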
It turned out that, in spite of all the existing generic code around NFS drivers, none of it was usable in my situation because RemoteFsDriver assumes the wrong model. So I had to do everything from scratch. The only thing I was able to reuse was the RemoteFsClient from remotefs_brick, which wasn't particularly useful either, but I had to use it for reasons that I'll explain in another post. The only service it provides is the ability to run the mount command to mount an NFS share.
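Mounting a share on demand with that helper looks roughly like this; the import path and constructor arguments are from memory (the class has moved between cinder.brick and os-brick), so treat them as an approximation.

from os_brick.remotefs import remotefs

client = remotefs.RemoteFsClient('nfs', root_helper='sudo',
                                 nfs_mount_point_base='/var/lib/cinder/mnt')

def ensure_mounted(provider_location):
    # Runs roughly: mount -t nfs <host:/path> <mount point>
    client.mount(provider_location)
    return client.get_mount_point(provider_location)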
Conclusions
I was actually quite surprised to see such dismal quality in the developer documentation and the actual implementation of something as hyped as a core part of OpenStack. Compare it, for example, with the Docker Volume Plugin documentation (and implementations) and you'll see a huge difference. Volume plugins are small, simple, clearly described, and can be implemented in any language.