|
Post by gericson on Aug 16, 2017 16:16:20 GMT
The thinking is that these groups are relatively small and don't change frequently, so simply create, delete, or update the endpoint group as needed.
|
|
|
Post by gericson on Aug 16, 2017 15:52:17 GMT
leekd : Yes, if you specify odata.metadata=Full on the request. Or, in the newest core vocabulary, if the schema specifies AutoMetadata with arguments of Count and NavigationLink.
mwalton : The NextLink and count are returned on a partial result if odata.metadata=Full or Minimal; otherwise, the NextLink and count are only recommended to be returned on a partial result. Also note the difference in the examples: leekd's example uses a collection(), while mwalton's example uses a ResourceCollection subclass.
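To make the paging behavior concrete, here is a minimal client-side sketch that follows Members@odata.nextLink until the whole collection has been read. The page payloads are hypothetical stand-ins for HTTP GET responses; the URLs and entry names are invented for illustration.

```python
# Hypothetical paged responses keyed by URL; a real client would issue GETs.
PAGES = {
    "/redfish/v1/ExampleEntryCollection": {
        "Members@odata.count": 4,
        "Members": [{"@odata.id": "/redfish/v1/ExampleEntryCollection/Entries/1"},
                    {"@odata.id": "/redfish/v1/ExampleEntryCollection/Entries/2"}],
        "Members@odata.nextLink": "/redfish/v1/ExampleEntryCollection/Page2",
    },
    "/redfish/v1/ExampleEntryCollection/Page2": {
        "Members@odata.count": 4,
        "Members": [{"@odata.id": "/redfish/v1/ExampleEntryCollection/Entries/3"},
                    {"@odata.id": "/redfish/v1/ExampleEntryCollection/Entries/4"}],
    },
}

def fetch_all_members(url):
    """Follow Members@odata.nextLink until the collection is exhausted."""
    members = []
    while url:
        page = PAGES[url]                         # stand-in for an HTTP GET
        members.extend(page["Members"])
        url = page.get("Members@odata.nextLink")  # absent on the last page
    return members

members = fetch_all_members("/redfish/v1/ExampleEntryCollection")
```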
|
|
|
Post by gericson on May 29, 2017 19:06:05 GMT
tophercantrell: There are some additional options and things to consider.
1) The Redfish protocol is based on the OData protocol, so for more background see there. For most purposes, Redfish is a proper subset.
2) The Redfish protocol supports the $skip query parameter. So, for example, you can skip the first 4000 records:
GET "/redfish/v1/ExampleEntryCollection?$skip=4000"
3) An entity type that is a Redfish resource collection is not an OData collection, but for the most part Redfish treats it as one.
4) OData-conformant access to the collected entities of a resource collection would look like:
GET "/redfish/v1/ExampleEntryCollection/Members?$skip=4000"
NOTE: For various reasons, the client should generally not rely on getting back the same members, or the same order, across multiple GETs of the same resource.
5) In the schema, you noted that the logEntryCollection/Members navigation property is annotated with <Annotation Term="OData.AutoExpand"/>. This is a default behavior. You can override it by adding the /$ref segment to the URL to have it return only the references:
GET "/redfish/v1/ExampleEntryCollection/$ref"
6) The NextLink should be appropriate to continue the request. For that reason, the /$ref segment should not be in the NextLink response unless references are being returned. The example should show:
"Members@odata.nextLink": "/redfish/v1/ExampleEntryCollection/Page2"
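As an illustration of option 2, a client can page through the collection by advancing $skip on each request. This sketch fakes the server with an in-memory list; the collection contents and page size are assumptions.

```python
# Hypothetical in-memory collection standing in for the server; a real
# client would GET /redfish/v1/ExampleEntryCollection/Members?$skip=N.
ALL_MEMBERS = [f"/redfish/v1/ExampleEntryCollection/Entries/{i}" for i in range(10)]
PAGE_SIZE = 4  # server-chosen page size

def get_page(skip):
    """Simulate GET ...?$skip=<skip>, returning one page of member references."""
    return ALL_MEMBERS[skip:skip + PAGE_SIZE]

def read_collection():
    """Advance $skip by the size of each returned page until exhausted.
    NOTE: as cautioned above, membership and order may change between
    GETs on a live service; this sketch assumes a stable collection."""
    members, skip = [], 0
    while True:
        page = get_page(skip)
        if not page:
            break
        members.extend(page)
        skip += len(page)
    return members

members = read_collection()
```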
|
|
|
Post by gericson on May 18, 2017 12:40:07 GMT
In your scenario:
- ProvisionedBytes is as you describe.
- AllocatedBytes is the amount of available storage currently allocated to this pool/volume.
- GuaranteedBytes is as you describe only if no other pools/volumes can be allocated from the available storage. If multiple pools/volumes may be allocated from the available storage, then GuaranteedBytes represents the amount reserved for the exclusive use of the pool/volume.
- ConsumedBytes is the amount of the AllocatedBytes that has been consumed.
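A small worked example of how these values might relate, using invented numbers for a thin-provisioned pool. The only invariant taken from the description above is that ConsumedBytes counts against AllocatedBytes; the rest of the values are illustrative.

```python
capacity = {  # example Capacity.Data values in bytes (illustrative only)
    "ProvisionedBytes": 10_995_116_277_760,  # logical size presented (10 TiB)
    "AllocatedBytes":    2_199_023_255_552,  # storage currently backing it
    "GuaranteedBytes":   1_099_511_627_776,  # reserved for exclusive use
    "ConsumedBytes":       549_755_813_888,  # portion of AllocatedBytes in use
}

# Per the description above, ConsumedBytes is consumed out of AllocatedBytes.
assert capacity["ConsumedBytes"] <= capacity["AllocatedBytes"]

# Remaining headroom within the currently allocated storage.
free_in_allocation = capacity["AllocatedBytes"] - capacity["ConsumedBytes"]
```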
|
|
|
Post by gericson on May 18, 2017 12:22:21 GMT
Hari, you are correct. This description is for a DriveCollection resource. It should read: "If present, the value shall be a reference to a DriveCollection resource that includes a Members property that is an array of zero or more references to contributing drives."
|
|
|
Post by gericson on May 11, 2017 18:41:52 GMT
1) At its simplest, you don't need to specify source volumes/disks/pools. The service implementation should be capable of finding suitable resources based solely on the requested ClassesOfService. However, you may specify volumes/disks/pools via CapacitySources. If you do, and if ClassesOfService are also specified, then only those resources that meet the requirements of the listed Classes of Service should actually be used. If those are insufficient, I expect the allocation to fail.
To create a StoragePool:
1) Identify the StorageService. For example, /StorageServices(1)
2) Identify the ClassOfService. For example, /StorageServices(1)/ClassesOfService(SSD)
3) Identify a set of candidate volumes as sources. For example, /StorageServices(1)/Volumes(223) and /StorageServices(1)/Volumes(537)
4) POST to /StorageServices(1)/StoragePools with a request body (example follows):
{
  "Name": "BasePool",
  "Description": "Base storage pool",
  "BlockSizeBytes": 512,
  "Capacity": {
    "Data": { "ProvisionedBytes": 10995116277760 },
    "CapacitySources": [{
      "ProvidingVolumes": [
        {"@odata.id": "/redfish/v1/StorageServices(1)/Volumes(223)"},
        {"@odata.id": "/redfish/v1/StorageServices(1)/Volumes(537)"}
      ]
    }]
  },
  "ClassesOfService": [{"@odata.id": "/redfish/v1/StorageServices(1)/ClassesOfService(SSD)"}]
}
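The same request body can also be assembled and serialized programmatically. This sketch mirrors the example request; the service path and identifiers are examples, not fixed names.

```python
import json

# Service root for the example; on a real service, discover this path.
service = "/redfish/v1/StorageServices(1)"

body = {
    "Name": "BasePool",
    "Description": "Base storage pool",
    "BlockSizeBytes": 512,
    "Capacity": {
        "Data": {"ProvisionedBytes": 10995116277760},
        "CapacitySources": [{
            "ProvidingVolumes": [
                {"@odata.id": f"{service}/Volumes(223)"},
                {"@odata.id": f"{service}/Volumes(537)"},
            ],
        }],
    },
    "ClassesOfService": [{"@odata.id": f"{service}/ClassesOfService(SSD)"}],
}

payload = json.dumps(body)  # POST this to {service}/StoragePools
```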
2) For StorageGroups, the assumption is that the various endpoint groups in particular would have a longer lifecycle and would be managed separately, especially for enterprise-class storage servers.
So, the process is something like this:
a) Identify client endpoints for the applications that will have access.
b) In StorageServices(1), create client endpoint groups based on redundancy and access characteristics.
c) Do the same as (a) and (b) for server endpoints. Note that the choice of server-side endpoints can alternatively be deferred to the implementation, as influenced by a ClassOfService for the StorageGroup.
d) Select the volumes required by the application into a VolumeCollection.
e) Assuming the above StorageServices(1), POST to StorageServices(1)/StorageGroups:
{
"Name": "SG_abc_005",
"Description": "System SATA",
"VolumesAreExposed": false,
"MembersAreConsistent": true,
"AccessState": "Active/Optimized",
"ClientEndpointGroups": {
"Members": [
{"@odata.id": "/redfish/v1/StorageServices(1)/ClientEndpointGroups(Path1)"},
{"@odata.id": "/redfish/v1/StorageServices(1)/ClientEndpointGroups(Path2)"}
]
},
"ServerEndpointGroups": {
"Members": [
{"@odata.id": "/redfish/v1/StorageServices(1)/ServerEndpointGroups(PathA)"},
{"@odata.id": "/redfish/v1/StorageServices(1)/ServerEndpointGroups(PathB)"}
]
},
"Volumes": {
"Members": [
{"@odata.id": "/redfish/v1/StorageServices(1)/Volumes(223)"},
{"@odata.id": "/redfish/v1/StorageServices(1)/Volumes(537)"} ]
},
"Links": {
"ClassOfService": {"@odata.id": "/redfish/v1/StorageServices(1)/ClassesOfService(SSD)"}
}
}
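For illustration, the StorageGroup request body from steps a) through e) can be built with a small helper. The helper name and all identifiers are hypothetical.

```python
svc = "/redfish/v1/StorageServices(1)"  # example service root

def refs(kind, names):
    """Build a Members array of @odata.id references under the service."""
    return {"Members": [{"@odata.id": f"{svc}/{kind}({n})"} for n in names]}

group = {
    "Name": "SG_abc_005",
    "Description": "System SATA",
    "VolumesAreExposed": False,
    "MembersAreConsistent": True,
    "AccessState": "Active/Optimized",
    "ClientEndpointGroups": refs("ClientEndpointGroups", ["Path1", "Path2"]),
    "ServerEndpointGroups": refs("ServerEndpointGroups", ["PathA", "PathB"]),
    "Volumes": refs("Volumes", [223, 537]),
    "Links": {"ClassOfService": {"@odata.id": f"{svc}/ClassesOfService(SSD)"}},
}
# POST `group` (JSON-encoded) to {svc}/StorageGroups
```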
|
|
|
Post by gericson on May 11, 2017 18:22:05 GMT
Not at this time. The Mockups under SSM/Mockups/StorageServices/[1,2,Simple1]/StorageGroups all contain good candidates for upgrading to show this. You are welcome to contribute enhancements to these mockups.
|
|
|
Post by gericson on Apr 20, 2017 12:32:04 GMT
Q1: Is there any other reason behind defining multiple server endpoints for a single LUN/Volume?
A1: This functionality supports a very common scenario for network-based storage known as multi-pathing. In SCSI, this is supported by a feature known as Asymmetric Logical Unit Access (ALUA). Typically, access to these multiple paths is serialized by a multi-path driver on the initiator side. The multi-path driver is responsible for recognizing that a logical unit (volume) has been exposed via multiple paths. The multi-path driver then generally presents the volume as a single logical disk to the operating system.
Q2: Is there any attribute/class that maps a LUN to a specific ServerEndpoint (one-to-one) rather than doing it as a group?
A2: No. This functionality is used for SAN access to a Volume, and there are typically multiple paths to support availability and performance considerations. Note: StorageGroup has a ClassOfService. The availability and performance aspects of that ClassOfService should govern the minimum number and type of server-side endpoints configured by the service implementation.
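As a sketch of what the multi-path driver does conceptually, the following groups several discovered paths into one logical disk per logical unit. The WWNs and fields are invented for illustration; a real driver keys on SCSI device identification data.

```python
# Paths as a multi-path driver might discover them (illustrative values).
paths = [
    {"wwn": "600A0B80001234", "server_endpoint": "PathA", "alua": "Active/Optimized"},
    {"wwn": "600A0B80001234", "server_endpoint": "PathB", "alua": "Active/NonOptimized"},
    {"wwn": "600A0B80005678", "server_endpoint": "PathA", "alua": "Active/Optimized"},
]

# Collapse paths to the same logical unit into a single logical disk,
# which is what the OS ultimately sees.
disks = {}
for p in paths:
    disks.setdefault(p["wwn"], []).append(p)
```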
|
|
|
Post by gericson on Apr 19, 2017 17:23:50 GMT
RecoveryTimeObjective specified in DataStorageCapabilities refers to the time to recover access to the primary storage. RecoveryTimeObjective specified in DataProtectionLoSCapabilities refers to the time to recover access via the secondary storage (i.e., the replica).
|
|
|
Post by gericson on Apr 19, 2017 17:20:00 GMT
A volume is exposed by adding it to a StorageGroup/Volumes collection. It is exposed via the network paths defined by endpoints belonging to the ClientEndpointGroups and ServerEndpointGroups.
The actions ExposeVolumes and HideVolumes are used to map or unmap the volumes.
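A hedged sketch of invoking such an action: the target URL below follows the general Redfish Actions URL pattern (resource/Actions/Namespace.ActionName), but the exact action name, parameters, and group identifier should be taken from the schema and mockups rather than from this example.

```python
import json

# Illustrative StorageGroup; the id and service path are assumptions.
group = "/redfish/v1/StorageServices(1)/StorageGroups(SG_abc_005)"

# Action target assuming the Redfish Actions pattern.
target = f"{group}/Actions/StorageGroup.ExposeVolumes"

# POST the body to `target`; parameters, if any, come from the action schema.
payload = json.dumps({})
```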
|
|
|
Post by gericson on Apr 19, 2017 17:15:18 GMT
The properties SupportedProvisioningPolicies, SupportedRecoveryTimeObjectives, and SupportsSpaceEfficiency each provide a range of supported values. However, not all values in each range are necessarily supported in combination with supported values from the other properties. These are useful to an administrator who wants to create a new line of service.
The property SupportedDataStorageLinesOfService represents a set of choices across the above properties that the service implementation understands and is capable of supporting collectively. These are useful for selecting supported lines of service when creating a new class of service.
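To illustrate the distinction, a client should treat the per-property ranges as advisory and check candidate combinations against the advertised lines of service. The property values below are invented for the sketch.

```python
# Combinations the service advertises as working together (illustrative).
supported_lines = [
    {"ProvisioningPolicy": "Thin",  "RecoveryTimeObjective": "Nearline", "IsSpaceEfficient": True},
    {"ProvisioningPolicy": "Fixed", "RecoveryTimeObjective": "Online",   "IsSpaceEfficient": False},
]

def is_supported(line):
    """True only if this exact combination is advertised by the service.
    Each individual value may appear in a per-property range and still
    not be supported in this combination."""
    return line in supported_lines

ok = is_supported({"ProvisioningPolicy": "Thin", "RecoveryTimeObjective": "Nearline", "IsSpaceEfficient": True})
bad = is_supported({"ProvisioningPolicy": "Thin", "RecoveryTimeObjective": "Online", "IsSpaceEfficient": True})
```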
|
|
|
Post by gericson on Apr 4, 2017 21:02:30 GMT
Creating separate client and server endpoint groups simplifies management of Volume (LUN) mapping and masking. This is largely driven by SCSI, but is not limited to that use. Note that the AccessState property can be used to express SCSI ALUA characteristics. A storage group is associated to a collection of volumes. These would often be associated to a particular application. The client endpoint group limits exposure of those volumes to specific client endpoints. Note that these may be virtual. Similarly, the server endpoint groups limit access to a specified set of the servers endpoints, this might be done for availability, performance, reliability, or security reasons.
|
|
|
Post by gericson on Jan 24, 2017 16:40:47 GMT
Adding to Jeff's reply. OriginResources allows a subscriber to limit the subscription to only those events generated by a specified resource, a resource in a specified collection, or a subordinate resource of a specified resource. In the Redfish world, OriginResources might refer to a specific Chassis; this would cover that Chassis as well as Thermal and Power and their underlying resources.
Note that OriginResources is specified as a collection of Resource.Item. This gives the subscriber some added flexibility in refining the subscription. For example, it allows OriginResources to be limited to a specific PowerControl entity.
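A sketch of a subscription body scoped with OriginResources, using an illustrative destination and resource paths; the event type and fragment reference are examples, not requirements.

```python
import json

subscription = {
    "Destination": "https://listener.example.com/events",  # example listener
    "EventTypes": ["Alert"],
    "OriginResources": [
        # A chassis: covers the chassis and its subordinate resources.
        {"@odata.id": "/redfish/v1/Chassis/1"},
        # A specific PowerControl entity within that chassis's Power resource.
        {"@odata.id": "/redfish/v1/Chassis/1/Power#/PowerControl/0"},
    ],
}

# POST this to the service's event subscription collection,
# e.g. /redfish/v1/EventService/Subscriptions on many implementations.
payload = json.dumps(subscription)
```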
|
|