|
Post by j2hilland on Jan 25, 2022 19:14:45 GMT
That presentation is very old, but the Fabrics model hasn't changed significantly since it was published. There are artifacts under development, and there is the CXL work-in-progress release to look at (though it is really CXL-specific), as well as Gen-Z presentations.
Support for PCIe has been in the Fabric model since its first release, and a Switch with SwitchType=PCIe has been there since day one. There isn't a PCIe-specific section in Switch, but that's because we haven't found a property that is needed beyond the base properties. If you find that some properties are needed, let us know. The last property added for PCIe was in 2020, and I think that was a link to the PCIe device that can be used to communicate with the switch.
Everything you need should be published in Fabric, Switch, Zone, Port, etc., and you should be able to trace from a PCIe device, through a switch, and so on. Everything we know about for PCIe is released, and there should be no need to use an OEM extension. If you find something we missed, please let us know.
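As a rough illustration of that trace, here is a minimal Python sketch over hypothetical payloads. The URIs and the specific link values are invented for the example; the property names (SwitchType, Ports, Links.AssociatedEndpoints) are meant to follow the published Fabric, Switch, and Port schemas, but treat this as a sketch, not a reference implementation.

```python
# Hypothetical Redfish Fabric payloads a client might retrieve while tracing
# from a Fabric through a PCIe Switch to the endpoints behind a Port.
fabric = {
    "@odata.id": "/redfish/v1/Fabrics/PCIe1",
    "FabricType": "PCIe",
    "Switches": {"@odata.id": "/redfish/v1/Fabrics/PCIe1/Switches"},
}

switch = {
    "@odata.id": "/redfish/v1/Fabrics/PCIe1/Switches/Switch1",
    "SwitchType": "PCIe",
    "Ports": {"@odata.id": "/redfish/v1/Fabrics/PCIe1/Switches/Switch1/Ports"},
}

port = {
    "@odata.id": "/redfish/v1/Fabrics/PCIe1/Switches/Switch1/Ports/1",
    "Links": {
        # Downstream PCIe endpoint reachable through this switch port.
        "AssociatedEndpoints": [
            {"@odata.id": "/redfish/v1/Fabrics/PCIe1/Endpoints/Dev1"}
        ]
    },
}

def endpoints_behind_port(port_resource):
    """Collect the endpoint URIs a client would follow from a Port."""
    links = port_resource.get("Links", {})
    return [e["@odata.id"] for e in links.get("AssociatedEndpoints", [])]
```

A client would GET each resource in turn and follow the `@odata.id` links, so no OEM extension is needed for this traversal.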
|
|
|
Post by j2hilland on Apr 16, 2021 21:18:06 GMT
|
|
|
Post by j2hilland on Feb 11, 2020 19:33:55 GMT
ConsumingComputerSystems and SupplyingComputerSystems were put in for the Composition model and were intended for partitioned systems, where larger ComputerSystems are composed from smaller ComputerSystems. Keep this in mind in what you are doing, as generic clients may expect this usage. We have a containment hierarchy in Chassis but not really in ComputerSystem. There is no URI limit specified in Redfish. There are URI limits in the RFCs, but there are also limits in implementations and buffer limits in smaller environments. We advise against using strings like "System" or "Subsystem"; using UUIDs and other unique qualifiers for collection members is advisable. The general advice is to keep the URI as small as possible, since it is not really meant to be human-parsable (though the Name property can be).
Can you describe your use case? If this is, for example, aggregation, then this is not how we would advise to represent the system.
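To sketch the URI advice above in Python: mint compact, unique member IDs rather than human-readable strings. The collection path and helper name here are hypothetical; only the "prefer UUIDs, keep URIs small" guidance comes from the discussion.

```python
import uuid

def member_uri(collection_uri, member_id=None):
    """Build a compact collection-member URI.

    If no ID is supplied, mint a 32-character hex UUID (no dashes),
    keeping the URI short and unique rather than human-parsable.
    """
    member_id = member_id or uuid.uuid4().hex
    return f"{collection_uri.rstrip('/')}/{member_id}"
```

The human-friendly description then belongs in the resource's Name property, not in the URI itself.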
|
|
|
Post by j2hilland on Feb 11, 2020 19:23:24 GMT
I would like to see it properly described as well. The closest mechanism we have is roles assigned to accounts, so if you are looking to distinguish behavior, it should probably be based on account and not based on ingress method.
|
|
|
Post by j2hilland on Feb 11, 2020 19:19:19 GMT
When we changed it, we assumed that the Login privilege would only pertain to the current account and not allow viewing of other accounts. It looks like that description was left out of the Privilege Registry. DMTF needs to update the description of the Login privilege to ensure that it only allows getting records from either AccountCollection or SessionCollection that pertain to the current account. Right now it reads as if anyone can access the information, and that was not the intention.
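A minimal sketch of the intended behavior, assuming a simplified session shape: a caller holding only the Login privilege sees just the members tied to their own account, while a caller with an administrative privilege sees everything. The session dicts, property names, and privilege strings here are illustrative, not taken from the Privilege Registry.

```python
# Hypothetical session collection members; "UserName" ties each session
# to the account that created it.
sessions = [
    {"Id": "1", "UserName": "alice"},
    {"Id": "2", "UserName": "bob"},
    {"Id": "3", "UserName": "alice"},
]

def visible_sessions(all_sessions, account, privileges):
    """Filter collection members by the caller's privileges (sketch)."""
    if "ConfigureManager" in privileges or "ConfigureUsers" in privileges:
        return list(all_sessions)  # administrative callers see everything
    if "Login" in privileges:
        # Login alone: only the caller's own sessions are visible.
        return [s for s in all_sessions if s["UserName"] == account]
    return []
```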
|
|
|
Post by j2hilland on May 23, 2019 19:03:02 GMT
But you are proposing a POST of a Chassis, and ChassisType has no required semantics. So this approach is problematic: it would allow some to work and others not, when there are no real restrictions (no "shalls") on the use of ChassisType. And the Contains and ContainedBy objects are currently read-only. When you combine the things you want to do with permissions, it also doesn't follow Redfish (permissions apply across the whole collection, not to specific members). I believe it is more problematic than you describe, and there are many corner cases.
We designed Redfish so that a NorthBound interface would not directly manipulate the Chassis objects. Instead, the implementation could do it on its behalf via a service (which isn't currently modeled). I could see the need to model that service. I would think an aggregation service would be able to provide the chassis and handle the relationships.
The architectural issue to me is: do you allow a client to directly manipulate the collection (which is so far largely read-only), or do you have the client tell the service to manipulate the collection? When you start thinking about accounts/permissions and having them on a resource or collection basis, the latter becomes more consistent with Redfish modeling.
Your approach could also result in a lack of interoperability between aggregators. Some would allow this; others would auto-discover. But customers could use the interface to provide groupings that other aggregators couldn't support. So it could be abused to provide logical groupings. The right answer is to provide a method of grouping for bulk operations through Redfish, which we have not yet attempted to model beyond ResourceBlocks (which is a different use case). An aggregation service could probably be the enabling resource for this functionality as well.
I understand that from one perspective it may seem like a simple solution to your problem, but for the industry as a whole it is something that either would not work or would not be enough. And it would lead to a lack of interoperability. I still think the right way is to model an aggregation service (thus permissions would be easier) that would perform operations such as this.
|
|
|
Post by j2hilland on May 22, 2019 19:41:29 GMT
I think the problem with changing these collections to allow Insert/Delete is a question of use case. We allow insert when you are truly creating. For example, a ComputerSystem is composed from existing resources by POSTing to the ComputerSystemCollection (based on the capabilities object). What we wouldn't want is secondary actions, such as adding a chassis and having a computer system automatically show up. Besides, it sounds to me like you're wanting to use the Manager's (for the Row/Rack) NorthBound interface to add stuff, which is really what would happen via the SouthBound interface (like through Redfish or a provider/plug-in).
What I read in your description is that you are aggregating other resources and are really looking to force discovery (either automatically going to find resources because you know you added them, or manually adding them based on something like an Address). So what you are telling your service is to go perform discovery on this Address, and then your Manager (a bad name, so I'm going to start calling it an Aggregation Service) is going to go discover what resources can be added based on what is found at that Address. So you want to use the NorthBound interface to tell it to use its SouthBound interface to find/proxy the Rows/Racks of equipment.
To me, this is more appropriately modeled as a Service with a Discovery action. Perhaps it is time for Redfish to document how Aggregation works and it seems that an Aggregation Service with some kind of action to force discovery through the Southbound interface would be an appropriate addition to Redfish.
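Since no such service is modeled yet, here is a purely hypothetical sketch of what a client-side request to a Discover action might look like. The target URI, action name, and the "Address" parameter are all invented for illustration; nothing here is standardized.

```python
import json

def build_discover_request(address):
    """Assemble a hypothetical POST to an Aggregation Service's
    Discover action, asking it to probe the given address over its
    SouthBound interface. The URI and parameter name are made up."""
    target = "/redfish/v1/AggregationService/Actions/AggregationService.Discover"
    body = {"Address": address}
    return "POST", target, json.dumps(body)

method, target, body = build_discover_request("192.0.2.10")
```

The point of the sketch is the shape of the interaction: the NorthBound client asks the service to discover, rather than POSTing resources into read-only collections itself.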
|
|
|
Post by j2hilland on May 14, 2019 19:31:30 GMT
If your point is that a 400 (Bad Request) would be more appropriate, I think you have a point. 501 should really be reserved for when the service doesn't support the feature at all. That can be determined from ProtocolFeaturesSupported.FilterQuery in the service root, but a 501 does seem inappropriate when the client made a mistake by trying to filter a single instance. I would think a 400 would be more appropriate for single instances. The spec currently says everything is a 501, though, so we will see what we can do to bring it into alignment with expected HTTP semantics.
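The status-code logic being argued for above can be sketched in a few lines. This is a sketch of the proposed behavior, not what the spec currently says (the spec, as noted, currently says 501 for everything); `supports_filter` stands in for ProtocolFeaturesSupported.FilterQuery from the service root.

```python
def filter_status(supports_filter, target_is_collection):
    """Proposed handling of a $filter query (sketch).

    501 -> service does not implement $filter at all
    400 -> client error: $filter applied to a single resource instance
    200 -> filter accepted on a collection
    """
    if not supports_filter:
        return 501
    if not target_is_collection:
        return 400
    return 200
```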
|
|
|
Post by j2hilland on Feb 14, 2019 17:05:40 GMT
Can you help us understand the use case? What files are these? Are you thinking of dump files or something similar? We have talked about support for some things but not a "generic" file store. We have also talked about how to reference images, for instance, for upper level clients.
|
|
|
Post by j2hilland on Oct 16, 2018 19:08:37 GMT
If this was a failure of the Task that was spawned, the 4xx or 5xx would come back, but the body would look more like what you would expect if the response had been synchronous. Some responses (like a delayed Delete) wouldn't have a real response body and may only have an error object. And if the error came from some other cause, you could also end up with only an error object. So the answer is to look in the error object at @Message.ExtendedInfo for Messages and examine the content. We should clarify the spec here. The MessageArgs, in particular, should have pointers to the actual problems. You also always have the possibility that an error occurred at either end, so you have to be prepared to handle that as well (for example, when the destination is unreachable).
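A small Python sketch of digging those details out of an error object. The overall shape follows the standard Redfish error format; the specific MessageId, Message text, and MessageArgs below are invented for the example.

```python
# Hypothetical error body returned for a failed (or delayed) operation.
error_body = {
    "error": {
        "code": "Base.1.0.GeneralError",
        "message": "A general error has occurred.",
        "@Message.ExtendedInfo": [
            {
                "MessageId": "Base.1.0.ResourceMissingAtURI",
                "Message": "The resource at /redfish/v1/Chassis/1 was not found.",
                # MessageArgs point at the actual problem (here, the URI).
                "MessageArgs": ["/redfish/v1/Chassis/1"],
            }
        ],
    }
}

def extended_messages(body):
    """Pull the ExtendedInfo message list out of an error object, if any."""
    return body.get("error", {}).get("@Message.ExtendedInfo", [])

ids = [m["MessageId"] for m in extended_messages(error_body)]
```

A client should run this kind of extraction on any 4xx/5xx body, since a bare error object may be all it gets back.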
|
|
|
Post by j2hilland on Jan 3, 2018 21:48:45 GMT
What was the HTTP response code in both cases? If you got a 200 on the first one (and the machine turned on), then you know it was accepted. I would also try a ForceOff and see what happens. It does appear to be a bug in the implementation if it indicates that it supports ForceRestart and then treats it as an error. As previously stated, contact the vendor. It is also odd to have empty strings for some of the properties like Model, PartNumber, SKU, etc. They should probably just be absent if there are no values for them (unless you removed them).
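A client can at least pre-validate the reset type against what the service advertises. This sketch assumes an action payload shaped like the ComputerSystem Reset action with a `ResetType@Redfish.AllowableValues` annotation; the target URI and value list are examples, not taken from the implementation under discussion.

```python
# Hypothetical Reset action as it might appear in a ComputerSystem resource.
reset_action = {
    "target": "/redfish/v1/Systems/1/Actions/ComputerSystem.Reset",
    "ResetType@Redfish.AllowableValues": ["On", "ForceOff", "ForceRestart"],
}

def can_reset(action, reset_type):
    """Check a requested ResetType against the advertised allowable values."""
    allowed = action.get("ResetType@Redfish.AllowableValues")
    # If the annotation is absent the client cannot pre-validate;
    # it has to try the request and handle any error response.
    return allowed is None or reset_type in allowed
```

Of course, this check only catches the mismatch; a service that advertises ForceRestart and then rejects it is still a vendor bug.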
|
|
|
Post by j2hilland on Dec 8, 2017 16:08:05 GMT
We can't make required properties optional - that's not a backwards-compatible change. If you're going to the trouble of creating messages and an OEM MessageId, you might as well just create a Message Registry. You can use the Registries collection off the service root to inform clients of where to get it.
|
|
|
Post by j2hilland on Dec 7, 2017 21:31:54 GMT
The Message Registry in 8011 is currently intended for ErrorInfo messages returned in a response body from a PUT/POST/PATCH operation. It's not really intended for use with Events. That being said, we are looking at a couple of approaches for standard messages that could be used for Events. The current Event types are under discussion, and the traditional Alert (IPMI-ish) and LifeCycle (CIM-ish) styles are being looked at as well in light of a new registry. So there isn't anything standard available at this point for Events. Right now, most OEMs are creating their own message registry, and you are certainly welcome to do so. With luck, we might have an eventing approach available in early 2018.
|
|
|
Post by j2hilland on Dec 7, 2017 18:24:43 GMT
Answering your second question first: it is not OK to add anything into ServiceRoot, as the schema defines that no properties outside the schema can be added (unless you do it under OEM). As for your first question: do we even need to mention the Swordfish version somewhere? The only reason RedfishVersion is in the root is that it is the version of the protocol spec. Schemas change and are versioned in @odata.type. And so far, I don't know that we've found the need to even use RedfishVersion. Can someone explain why the version of the Swordfish spec should be in the model?
|
|
|
Post by j2hilland on Nov 30, 2017 22:03:29 GMT
I don't think you can make cancelling a "shall" at this point. It might be a "should", since users will want to cancel tasks, but there are some tasks that, once started, simply cannot be undone. So you can support cancel but return an error on the cancel.
The spec already calls out the usage of the Allow header, so when the async operation is returned, an Allow header with DELETE as a value would indicate that the client could attempt to cancel the async operation (though, again, an error may be returned). We probably ought to clarify the spec here to indicate that the Allow header shall be returned if the async operation can be cancelled, and that it shall apply to either the Task Monitor or the Task Resource.
Note that the document indicates that you can cancel via either the Task Monitor or the Task Resource. Cancelling the Task Monitor via DELETE preserves the Task Resource so that someone can check on it later, so that would be the preferred semantic to support, and it eliminates the need for an action. So we could clarify that paragraph.
We may have a hole with respect to the Task Monitor: if the client program terminates, the URI of the Task Monitor is lost. Adding a Task Monitor property to the Task Resource itself would allow another client to monitor the task and get the completion payload across errors. This implies sufficient permissions, but otherwise it shouldn't be a permissions/security concern.
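The client-side check described above can be sketched simply: only attempt DELETE on the async operation if the Allow header advertised it, and still be prepared for the service to refuse.

```python
def can_attempt_cancel(headers):
    """Return True if the response's Allow header advertised DELETE,
    i.e. the client may attempt to cancel the async operation.
    Even then, the service may still return an error on the DELETE."""
    allow = headers.get("Allow", "")
    methods = [m.strip().upper() for m in allow.split(",") if m]
    return "DELETE" in methods
```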
Hope this helps.
|
|