Intel(R) Management Engine (ME) Client bus API
==============================================


Rationale
=========

The MEI misc character device is useful for dedicated applications to send and
receive data from user space to the many FW appliances found in Intel's ME.
However, for some of the ME functionalities it makes sense to leverage existing
software stacks and expose them through existing kernel subsystems.

In order to plug seamlessly into the kernel device driver model we add a kernel
virtual bus abstraction on top of the MEI driver. This allows implementing
Linux kernel drivers for the various MEI features as stand-alone entities found
in their respective subsystems. Existing device drivers can even potentially be
re-used by adding an MEI CL bus layer to the existing code.


MEI CL bus API
==============

A driver implementation for an MEI Client is very similar to existing
bus-based device drivers. The driver registers itself as an MEI CL bus driver
through the mei_cl_driver structure:

struct mei_cl_driver {
	struct device_driver driver;
	const char *name;

	const struct mei_cl_device_id *id_table;

	int (*probe)(struct mei_cl_device *dev,
		     const struct mei_cl_device_id *id);
	int (*remove)(struct mei_cl_device *dev);
};

struct mei_cl_device_id {
	char name[MEI_NAME_SIZE];
	kernel_ulong_t driver_info;
};

The mei_cl_device_id structure allows the driver to bind itself against a
device name.

To actually register a driver on the ME Client bus one must call the
mei_cl_driver_register() API. This is typically called at module init time.

Once registered on the ME Client bus, a driver will typically try to do some
I/O on this bus and this should be done through the mei_cl_send() and
mei_cl_recv() routines. The latter is synchronous (it blocks and sleeps until
data shows up).

In order for drivers to be notified of pending events waiting for them (e.g.
an Rx event) they can register an event handler through the
mei_cl_register_event_cb() routine. Currently only the MEI_EVENT_RX event
will trigger an event handler call and the driver implementation is supposed
to call mei_cl_recv() from the event handler in order to fetch the pending
received buffers.


Example
=======

As a theoretical example let's pretend the ME comes with a "contact" NFC IP.
The driver init and exit routines for this device would look like:

#define CONTACT_DRIVER_NAME "contact"

static struct mei_cl_device_id contact_mei_cl_tbl[] = {
	{ CONTACT_DRIVER_NAME, },

	/* required last entry */
	{ }
};
MODULE_DEVICE_TABLE(mei_cl, contact_mei_cl_tbl);

static struct mei_cl_driver contact_driver = {
	.id_table = contact_mei_cl_tbl,
	.name = CONTACT_DRIVER_NAME,

	.probe = contact_probe,
	.remove = contact_remove,
};

static int __init contact_init(void)
{
	int r;

	r = mei_cl_driver_register(&contact_driver);
	if (r) {
		pr_err(CONTACT_DRIVER_NAME ": driver registration failed\n");
		return r;
	}

	return 0;
}

static void __exit contact_exit(void)
{
	mei_cl_driver_unregister(&contact_driver);
}

module_init(contact_init);
module_exit(contact_exit);

And the driver's simplified probe routine would look like this:

static int contact_probe(struct mei_cl_device *dev,
			 const struct mei_cl_device_id *id)
{
	struct contact_driver *contact;

	[...]
	/* Enable the firmware client before doing any I/O on the bus. */
	mei_cl_enable_device(dev);

	/* Get notified (MEI_EVENT_RX) whenever data is pending for us. */
	mei_cl_register_event_cb(dev, contact_event_cb, contact);

	return 0;
}

In the probe routine the driver first enables the MEI device and then
registers an ME bus event handler, which is as close as it can get to
registering a threaded IRQ handler.
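For symmetry, the remove callback named in the contact_driver structure would
undo what probe did. The following is only a minimal sketch: it assumes the
bus also exposes a mei_cl_disable_device() counterpart to
mei_cl_enable_device(), and contact_remove() is just the hypothetical hook
referenced above:

static int contact_remove(struct mei_cl_device *dev)
{
	/*
	 * Assumed counterpart of mei_cl_enable_device(): stop the FW
	 * client so no further MEI_EVENT_RX callbacks are delivered.
	 */
	mei_cl_disable_device(dev);

	[...]

	return 0;
}

Back on the Rx side, the actual I/O happens in the event handler registered
from probe.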
The handler implementation will typically call some I/O routine depending on
the pending events:

#define MAX_NFC_PAYLOAD 128

static void contact_event_cb(struct mei_cl_device *dev, u32 events,
			     void *context)
{
	struct contact_driver *contact = context;

	if (events & BIT(MEI_EVENT_RX)) {
		u8 payload[MAX_NFC_PAYLOAD];
		int payload_size;

		payload_size = mei_cl_recv(dev, payload, MAX_NFC_PAYLOAD);
		if (payload_size <= 0)
			return;

		/* Hook to the NFC subsystem */
		nfc_hci_recv_frame(contact->hdev, payload, payload_size);
	}
}
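The Tx direction, mentioned above but not illustrated, goes through
mei_cl_send(). Below is a minimal sketch in the same theoretical "contact"
example: contact_send() and the contact->dev field are hypothetical, and it
assumes mei_cl_send() returns the number of bytes written or a negative error
code, mirroring mei_cl_recv():

static int contact_send(struct contact_driver *contact,
			u8 *payload, size_t payload_size)
{
	int written;

	if (payload_size > MAX_NFC_PAYLOAD)
		return -EINVAL;

	/*
	 * Synchronously write the frame to the FW client; contact->dev is
	 * assumed to hold the mei_cl_device passed to contact_probe().
	 */
	written = mei_cl_send(contact->dev, payload, payload_size);
	if (written < 0)
		return written;

	return 0;
}

The NFC subsystem's frame transmission hook would typically end up calling a
routine of this kind.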