|
Post by mwalton on Nov 13, 2017 12:25:51 GMT
I voted to reject the request as it could lead to ambiguity, and would also lead to different implementations on different devices, so you would not be able to rely on the behaviour being consistent.
|
|
|
Post by mwalton on Sept 1, 2017 7:41:22 GMT
Hi Jeff,
Thanks for the reply and for considering this for the next schema release. The proposal for dealing with OEM sensor types sounds good to me.
I agree that it would be better to spend time creating specific Redfish Message Registry styled events, since they can afford to be a bit more descriptive and can optionally remove the need for a lot of the string content from the service.
Thanks,
Mark
|
|
|
Post by mwalton on Aug 24, 2017 15:03:37 GMT
Hi All,
I'm a developer working on Redfish for our BMC, and I'm in the process of implementing the log service that renders the SEL entries. The schema for LogEntry (redfish.dmtf.org/schemas/v1/LogEntry.v1_2_0.json) specifies the enumerations for SensorType and LogEntryCode, which map all of the generic sensor types and event types for IPMI. However, the SensorType enumeration doesn't appear to allow for OEM types 0xC0 - 0xFF. Our system has a few OEM sensors, so if I translate the sensor type for SEL events from those sensors, the SensorType field will fail JSON or CSDL schema validation because the value isn't in the supported list. Similarly, the LogEntryCode enumeration doesn't appear to allow for 'Sensor Specific' event types (which are defined in the IPMI specification) or OEM event types, so again the translated entries will fail schema validation.
The questions I have are as follows:
1. Is there a plan to extend these enumerations to cater for OEM sensor or event types, or to add some other mechanism to allow them in the schema?
2. Since the 'Sensor Specific' events are defined in the IPMI specification, is there a plan to add support for at least these sensor-specific events?
3. In lieu of points 1 and 2, is it reasonable to render the SensorType and LogEntryCode fields as null and instead use the OEM object (with a reference to an OEM schema) to render the full range for our system, or is it more appropriate to render this into the Message/MessageId fields?
Thanks,
Mark
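To illustrate the option in question 3, here is a minimal sketch of a translation layer that emits null for the OEM range so the entry still validates. The mapping table is abbreviated and all names here are my own for illustration, not taken from the schema or our firmware:

```python
# Abbreviated map from the raw IPMI sensor type byte to the Redfish
# LogEntry SensorType enum; only a few standard types shown.
IPMI_SENSOR_TYPES = {
    0x01: "Temperature",
    0x02: "Voltage",
    0x04: "Fan",
    # ... remaining standard IPMI sensor types elided ...
}

def render_sensor_type(ipmi_type):
    """Return the Redfish SensorType string for a standard IPMI type,
    or None for the OEM range 0xC0-0xFF so the field serializes as
    null and the raw value can be carried in the entry's Oem object."""
    if 0xC0 <= ipmi_type <= 0xFF:
        return None
    return IPMI_SENSOR_TYPES.get(ipmi_type)
```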
|
|
|
Post by mwalton on Aug 17, 2017 8:52:04 GMT
Apologies - I misread the title and thought the discussion was about @odata.nextLink!
|
|
|
Post by mwalton on Aug 16, 2017 13:18:19 GMT
I don't believe so. The intention of the @odata.nextLink field is to provide a mechanism to page resource collections that contain more data than can reasonably be returned in a single response, for example a System Event Log with 2000 entries. (I pick the SEL as an example because LogService entries are meant to be expanded into the collection rather than linked via @odata.id, so a lot more data is rendered.)
In that circumstance, the implementation might limit the number of resources it returns per response (to 10, for example), so only 10 of the 2000 entries come back from a single GET.
The collection's Members@odata.count field should still report the full number of resources in the collection, and Members@odata.nextLink indicates how to read the next 10 entries (without it, a GET on the URI would only ever return the first 10).
So for example:
A GET on /redfish/v1/Managers/0/LogServices/SEL/Entries might return
{
  "@odata.id": "/redfish/v1/Managers/0/LogServices/SEL/Entries",
  "@odata.context": "/redfish/v1/$metadata#LogEntryCollection.LogEntryCollection",
  "@odata.type": "#LogEntryCollection.LogEntryCollection",
  "Description": "Log Entry Collection",
  "Name": "SEL Log Entries",
  "Members@odata.count": 2000,
  "Members": [
    {
      ... "Id": "0", /* Note that the ID is 0 */ ...
    } /* 10 times */
  ],
  "Members@odata.nextLink": "/redfish/v1/Managers/0/LogServices/SEL/Entries?$skip=10"
}
And then if you follow the Members@odata.nextLink URI (/redfish/v1/Managers/0/LogServices/SEL/Entries?$skip=10):
{
  "@odata.id": "/redfish/v1/Managers/0/LogServices/SEL/Entries",
  "@odata.context": "/redfish/v1/$metadata#LogEntryCollection.LogEntryCollection",
  "@odata.type": "#LogEntryCollection.LogEntryCollection",
  "Description": "Log Entry Collection",
  "Name": "SEL Log Entries",
  "Members@odata.count": 2000,
  "Members": [
    {
      ... "Id": "10", /* Note that this ID is now 10 */ ...
    } /* 10 times */
  ],
  "Members@odata.nextLink": "/redfish/v1/Managers/0/LogServices/SEL/Entries?$skip=20"
}
You should be able to follow Members@odata.nextLink all the way to the end of the collection, and you'll end up with the same data as you would have if the resource were not paged.
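As a sketch, a client can drain a paged collection like this. `fetch` here is a stand-in for whatever HTTP GET plus JSON decode you already have; it isn't part of Redfish:

```python
def all_members(fetch, uri):
    """Collect every member of a paged Redfish collection by following
    Members@odata.nextLink until it is absent on the last page."""
    members = []
    while uri:
        page = fetch(uri)  # GET the URI and return the decoded JSON body
        members.extend(page.get("Members", []))
        uri = page.get("Members@odata.nextLink")  # None ends the loop
    return members
```

With the SEL example above, each call appends 10 entries and the loop stops once a page carries no Members@odata.nextLink.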
I'm not a member of the DMTF or the working group, but this is my understanding of the Redfish specification (and OData in general).
Hope this helps!
Thanks,
Mark
|
|
|
Post by mwalton on Mar 7, 2017 12:49:48 GMT
Hi Max,
Most implementations should have a session inactivity timeout (part of the SessionService) that can be configured to ensure that inactive sessions get cleaned up. I understand that this still may not be viable for your script/application if there are multiple sessions already in flight, but this is what sessions were designed for.
I can't speak for other implementations, but the AccountService provides another feature to deter brute-forcing of passwords: the ability to lock an account after multiple failed login attempts. I would expect that to be used rather than a forced delay. Again, though, I cannot speak for other implementations.
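For example, both knobs are plain PATCH bodies. The property names below come from the standard SessionService and AccountService schemas, but the values and target URIs are illustrative, not a recommendation:

```python
import json

# Illustrative PATCH bodies; send them to /redfish/v1/SessionService and
# /redfish/v1/AccountService respectively with your usual HTTP client.
session_patch = json.dumps({"SessionTimeout": 600})  # seconds of inactivity

account_patch = json.dumps({
    "AccountLockoutThreshold": 5,   # failed logins before lockout
    "AccountLockoutDuration": 300,  # seconds the account stays locked
})
```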
Caching a password hash is quite insecure and runs the risk of a code bug exposing a plaintext password. It is certainly possible, but security comes at a cost, and in this case it would appear to be access time!
As above, I'm not commenting on other implementations (or absolving them of any blame!) - just trying to provide my observations as an implementer, which might help guide you.
Thanks,
Mark
|
|
|
Post by mwalton on Mar 1, 2017 17:28:44 GMT
Hi Max,
Have you considered using a Redfish session for your needs, when you have multiple requests?
If an implementation is handling passwords correctly, it should be hashing the password with a secure hashing function (something like SHA-256 or SHA-512) to compare against the stored hash for the user, which, on fairly low-powered CPUs such as those in BMCs, is quite a time-intensive task.
With a session, this hashing operation is only performed once (on session creation) and then subsequent requests just use the session auth token, which is a relatively simple string compare, and therefore much quicker to process.
Also, another fairly time-intensive task for a low-powered CPU is the negotiation of an HTTPS connection. If you aren't already, you could consider using a persistent connection. Using curl for single-shot requests won't be doing this, so your HTTPS connection will need to be re-established for every request.
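To sketch both points together with the standard library (host, credentials and the certificate-verification skip are placeholders, not a recommendation), http.client keeps one TLS connection open across requests and the session token replaces per-request password hashing:

```python
import http.client
import json
import ssl  # only needed for the (placeholder) TLS context in the usage below

def open_session(conn, username, password):
    """POST to the Redfish SessionService once; later requests reuse the
    returned X-Auth-Token instead of re-hashing the password."""
    body = json.dumps({"UserName": username, "Password": password})
    conn.request("POST", "/redfish/v1/SessionService/Sessions", body,
                 {"Content-Type": "application/json"})
    resp = conn.getresponse()
    resp.read()  # drain the body so the connection can be reused
    return resp.getheader("X-Auth-Token"), resp.getheader("Location")

def get(conn, token, path):
    """GET over the same persistent connection, authenticated by token."""
    conn.request("GET", path, headers={"X-Auth-Token": token})
    return json.loads(conn.getresponse().read())

# Usage (placeholder host and credentials):
# conn = http.client.HTTPSConnection("bmc.example.com",
#            context=ssl._create_unverified_context())
# token, session_uri = open_session(conn, "admin", "password")
# systems = get(conn, token, "/redfish/v1/Systems")
# conn.request("DELETE", session_uri, headers={"X-Auth-Token": token})
```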
The slowdown when accessing /redfish/v1 is not strictly necessary, as the authentication information doesn't *need* to be parsed there, but I suspect the implementation parses any auth data on every request, regardless of the URI. I have to admit that my implementation does this too.
Hope this helps,
Mark
|
|