This document summarizes a Q&A session on WebRTC standards and implementation. The session focused on progress towards making the WebRTC specification a Candidate Recommendation, including reducing open issues. The goal is Candidate Recommendation in early May, followed by further testing and implementation work needed before the specification can be finalized, which may take until 2019 to complete due to the complexity and number of stakeholders involved.
The document discusses the Next Generation Leaders Programme by the Internet Society which aims to identify and train emerging leaders aged 20-40 who can work across the fields of policy, technology, business and education to address challenges facing the future of the Internet. The programme blends coursework with hands-on experience at IETF meetings and other events. It helps participants develop skills in technology, policy, business and diplomacy to tackle issues around the open development, evolution and use of the Internet.
Daniel Stenberg gave a presentation on the current status of HTTP/2. He discussed how HTTP usage has grown significantly, leading to slower page loads. HTTP/1.1 workarounds like concatenation and sharding add complexity. HTTP/2 aims to address these issues through features like multiplexed streams, header compression, and server push while maintaining backwards compatibility. Major browsers now support HTTP/2, but it currently only makes up a small percentage of traffic. Widespread adoption will take time as developers adjust practices.
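One of those features, header compression (HPACK), replaces header fields the peer has already seen with small table indices. A toy sketch of the idea (the static-table entries follow RFC 7541, but the encoding here is simplified for illustration and is not a real HPACK encoder):

```python
# Toy HPACK-style header compression: headers already known to both
# sides are sent as a small index instead of the full name/value pair.
STATIC_TABLE = {
    (":method", "GET"): 2,   # a few real entries from RFC 7541's static table
    (":path", "/"): 4,
    (":scheme", "https"): 7,
}

def encode_headers(headers, dynamic_table):
    """Encode each header as an index if known, else as a literal that
    is added to the dynamic table for future requests."""
    out = []
    for field in headers:
        if field in STATIC_TABLE:
            out.append(("indexed", STATIC_TABLE[field]))
        elif field in dynamic_table:
            out.append(("indexed", dynamic_table[field]))
        else:
            dynamic_table[field] = 62 + len(dynamic_table)  # dynamic IDs follow the static table
            out.append(("literal", field))
    return out

table = {}
first = encode_headers([(":method", "GET"), ("user-agent", "demo")], table)
second = encode_headers([(":method", "GET"), ("user-agent", "demo")], table)
print(first)   # one indexed entry, one literal
print(second)  # both indexed: the repeated request costs only two small indices
```

The second request illustrates why HPACK helps pages with many small requests: repeated headers shrink to index references.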
This document provides instructions for researching RFCs (Request for Comments) through the RFC Editor website. It describes how RFCs started as memos documenting the development of ARPANET and are now managed by IETF. It outlines steps to search for RFCs by keyword, status, and humorous RFCs. It also describes how anyone can propose an RFC by submitting an Internet-Draft, which undergoes review before possible publication.
How it works: Internet standards setting, ICANN 53 (ICANN)
The document provides an overview of the Internet Engineering Task Force (IETF). It describes the IETF as an open standards organization made up of volunteers who work to develop standards for the Internet through a rough consensus process. It outlines the structure and processes of the IETF, including its working groups, leadership groups, standards approval process, and relationship to other organizations like the Internet Society and Internet Assigned Numbers Authority.
Siphon - Near Real Time Databus Using Kafka, Eric Boyd, Nitin Kumar (Confluent)
Siphon is a highly available and reliable distributed pub/sub system built on Apache Kafka. It is used to publish, discover, and subscribe to near real-time data streams for operational and product intelligence. Siphon serves as a “Databus” for a variety of producers and subscribers at Microsoft, is compliant with security and privacy requirements, and has built-in auditing and quality control. This session will provide an overview of the use of Kafka at Microsoft and then take a deep dive into Siphon. We will describe an important business scenario and discuss the technical details of the system in the context of that scenario. We will also cover the design and implementation of the service, its scale, and real-world production experiences from operating the service in the Microsoft cloud environment.
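Siphon's internals aren't shown in this summary, but the publish/discover/subscribe pattern it describes can be sketched with a minimal in-memory bus (the API and topic names below are invented for illustration, not Siphon's or Kafka's actual interface):

```python
from collections import defaultdict

class MiniBus:
    """Toy pub/sub 'databus': named topics, subscriber callbacks, discovery."""
    def __init__(self):
        self.topics = defaultdict(list)       # topic -> retained messages
        self.subscribers = defaultdict(list)  # topic -> callbacks

    def publish(self, topic, message):
        self.topics[topic].append(message)
        for callback in self.subscribers[topic]:
            callback(message)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def discover(self):
        return sorted(self.topics)  # list the streams available to subscribers

bus = MiniBus()
received = []
bus.subscribe("telemetry", received.append)
bus.publish("telemetry", {"latency_ms": 42})
print(bus.discover(), received)
```

A real databus adds the properties the abstract emphasizes on top of this shape: durable partitioned logs, consumer offsets, auditing, and access control.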
Unified Framework for Real Time, Near Real Time and Offline Analysis of Video... (Spark Summit)
The document discusses Conviva's Unified Framework (CUF) for analyzing video streaming data in real-time, near real-time, and offline using Spark and Databricks. It summarizes Conviva's platform for measuring video quality of experience across devices and networks. The framework unifies the three analysis stacks onto Spark to share code and insights. Using Databricks improves the offline analysis speed and enables data scientists to independently explore large datasets and build machine learning models.
This document discusses factors that influence developers to contribute to discussions in open source software projects.
It builds models to predict whether developers will contribute to discussion threads with up to 89% accuracy. The most important factors are the developer's previous contribution activity, the length and content of the thread, and how much knowledge the developer has about the topic.
Developers are more likely to contribute when they have relevant knowledge, see a lack of responses from others, and have time available. Models can help identify threads most needing a developer's contribution, but marking a thread as such does not mean it shouldn't still be read.
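As a hedged illustration of how such a model might combine those factors, here is a toy logistic scorer (the weights and feature scales are invented for illustration and are not the study's fitted model):

```python
import math

def contribution_probability(prev_contributions, thread_length, topic_knowledge):
    """Toy logistic model: prior activity and topic knowledge raise the
    predicted probability of contributing; long threads lower it.
    All weights are illustrative, not taken from the paper."""
    z = 0.8 * prev_contributions + 0.3 * topic_knowledge - 0.1 * thread_length - 1.0
    return 1 / (1 + math.exp(-z))

p_active = contribution_probability(prev_contributions=5, thread_length=10, topic_knowledge=3)
p_new = contribution_probability(prev_contributions=0, thread_length=10, topic_knowledge=0)
print(round(p_active, 2), round(p_new, 2))  # active, knowledgeable developer scores higher
```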
This document summarizes an update on IPv6 activity in CERNET2, presented on March 5, 2015. It notes that CERNET2 has had a pure IPv6 backbone since 2003, connecting over 600 universities, and that IPv6-related research and experiments are conducted on CERNET2. Traffic statistics from January 2015 show backbone traffic exceeding 40 Gbps, and over 10 Gbps at some individual locations. The document also discusses challenges with scaling the DNS root server system and efforts to address them through techniques like anycasting and expanding the number of root server operators.
Cluster computing frameworks such as Hadoop or Spark are tremendously beneficial in processing and deriving insights from data. However, long query latencies make these frameworks sub-optimal choices to power interactive applications. Organizations frequently rely on dedicated query layers, such as relational databases and key/value stores, for faster query latencies, but these technologies suffer many drawbacks for analytic use cases. In this session, we discuss using Druid for analytics and why the architecture is well suited to power analytic applications.
User-facing applications are replacing traditional reporting interfaces as the preferred means for organizations to derive value from their datasets. In order to provide an interactive user experience, user interactions with analytic applications must complete on the order of milliseconds. To meet these needs, organizations often struggle with selecting a proper serving layer. Many serving layers are selected because of their general popularity without understanding the possible architecture limitations.
Druid is an analytics data store designed for analytic (OLAP) queries on event data. It draws inspiration from Google’s Dremel, Google’s PowerDrill, and search infrastructure. Many enterprises are switching to Druid for analytics, and we will cover why the technology is a good fit for its intended use cases.
Speaker
Nishant Bangarwa, Software Engineer, Hortonworks
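One reason Druid answers OLAP queries quickly is ingest-time rollup: events sharing the same dimension values within a time bucket are pre-aggregated into a single row. A rough sketch of the idea (plain Python, not Druid's actual implementation; the event fields are invented):

```python
from collections import defaultdict

def rollup(events, granularity_s=60):
    """Pre-aggregate events by (time bucket, dimension), Druid-rollup style:
    queries then scan far fewer rows than raw events."""
    buckets = defaultdict(lambda: {"count": 0, "value_sum": 0})
    for ev in events:
        bucket_start = ev["ts"] // granularity_s * granularity_s
        key = (bucket_start, ev["page"])
        buckets[key]["count"] += 1
        buckets[key]["value_sum"] += ev["value"]
    return dict(buckets)

events = [
    {"ts": 5,  "page": "/home", "value": 3},
    {"ts": 42, "page": "/home", "value": 7},   # same minute, same page: merged
    {"ts": 65, "page": "/home", "value": 1},   # next minute: new row
]
summary = rollup(events)
print(summary)  # two aggregated rows instead of three raw events
```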
【EPN Seminar Nov. 10, 2015】 Keynote – Open innovation and Engineering community (Cisco Systems G.K.)
The document summarizes how the IETF process works to develop Internet standards and protocols. It discusses that the IETF is a community that works to describe and solve issues on the Internet, often publishing documents called RFCs. It provides examples of how technologies like Secure Shell and TCP congestion control were developed through discussions, Internet-Drafts, and published RFCs. It emphasizes that the IETF process relies on open discussion and consensus among participants to select solutions that work in practice for standards.
Interactive real-time dashboards on data streams using Kafka, Druid, and Supe... (DataWorks Summit)
When interacting with analytics dashboards, two key requirements for a smooth user experience are quick response time and data freshness. When building fast interactive BI dashboards over streaming data, organizations often struggle to select a proper serving layer.
Cluster computing frameworks such as Hadoop or Spark work well for storing large volumes of data, although they are not optimized for making it available for queries in real time. Long query latencies also make these systems suboptimal choices for powering interactive dashboards and BI use cases.
This talk presents an open source real-time data analytics stack using Apache Kafka, Druid, and Superset. The stack combines the low-latency streaming and processing capabilities of Kafka with Druid, which enables immediate exploration and provides low-latency queries over the ingested data streams. Superset provides the visualization and dashboarding that integrates nicely with Druid. In this talk we will discuss why this architecture is well suited to interactive applications over streaming data, present an end-to-end demo of the complete stack, discuss its key features, and discuss performance characteristics from real-world use cases.
Speaker
Nishant Bangarwa, Software Engineer, Hortonworks
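As a concrete illustration of the serving-layer side, a dashboard panel backed by Druid typically issues a small native JSON query. A sketch of a topN query body (the field layout follows Druid's native query format; the datasource, dimension, and metric names are invented):

```python
import json

# Sketch of a Druid native topN query: "top 5 pages by edit count over
# one hour". Datasource and field names are illustrative only.
query = {
    "queryType": "topN",
    "dataSource": "wikipedia_edits",
    "dimension": "page",
    "metric": "edit_count",
    "threshold": 5,
    "granularity": "all",
    "aggregations": [
        {"type": "longSum", "name": "edit_count", "fieldName": "count"}
    ],
    "intervals": ["2018-01-01T00:00:00Z/2018-01-01T01:00:00Z"],
}
body = json.dumps(query)
print(body)  # this JSON body would be POSTed to the Druid broker
```

Because Druid pre-aggregates and indexes the stream at ingest, queries of this shape can return in milliseconds, which is what makes the dashboard interactive.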
AusNOG 2015 - Why you should read RFCs and Internet Drafts (and what you need... (Mark Smith)
This document discusses Request for Comments (RFCs) and Internet Drafts (IDs), which are documents published by the Internet Engineering Task Force (IETF) that specify and describe Internet standards and proposed standards. It explains that RFCs document standards and operational practices, while IDs are works in progress that may eventually become RFCs. It provides reasons for reading these documents, such as learning how protocols are supposed to work, finding related information, and helping to improve specifications. It also outlines the different types of RFCs and publication process for IDs, and where to find these documents.
Enhancing Performance with Globus and the Science DMZ (Globus)
ESnet has led the way in helping national facilities—and many other institutions in the research community—configure Science DMZs and troubleshoot network issues to maximize data transfer performance. In this talk we will present a summary of approaches and tips for getting the most out of your network infrastructure using Globus Connect Server.
Interactive real time dashboards on data streams using Kafka, Druid, and Supe... (DataWorks Summit)
When interacting with analytics dashboards, two key requirements for a smooth user experience are quick response time and data freshness. When building fast interactive BI dashboards over streaming data, organizations often struggle to select a proper serving layer.
Cluster computing frameworks such as Hadoop or Spark work well for storing large volumes of data, although they are not optimized for making it available for queries in real time. Long query latencies also make these systems suboptimal choices for powering interactive dashboards and BI use cases.
This talk presents an open source real-time data analytics stack using Apache Kafka, Druid, and Superset. The stack combines the low-latency streaming and processing capabilities of Kafka with Druid, which enables immediate exploration and provides low-latency queries over the ingested data streams. Superset provides the visualization and dashboarding that integrates nicely with Druid. In this talk we will discuss why this architecture is well suited to interactive applications over streaming data, present an end-to-end demo of the complete stack, discuss its key features, and discuss performance characteristics from real-world use cases.
Speaker
Nishant Bangarwa, Software Engineer, Hortonworks
- The document describes a media player plugin that allows choosing between IPv4 and IPv6 protocols for streaming video chunks to determine the faster connection speed.
- The plugin modifies an existing media player (Hls.js) to measure download speeds for each video chunk delivered over IPv4 or IPv6, and then selects the preferable protocol.
- Statistics from over 950 streaming sessions in Japan show IPv6 speeds are generally faster than IPv4, especially during night hours, though IPv4 can be faster in some cases like with older IPv6 tunneling.
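The selection logic described above can be sketched as: collect per-chunk throughput samples for each address family, then prefer IPv6 only when it is measurably faster (the margin and sample values below are invented; the real plugin measures actual Hls.js chunk downloads):

```python
def preferred_protocol(samples_v4, samples_v6, margin=1.1):
    """Pick IPv6 only if its mean chunk throughput beats IPv4 by a margin,
    so a marginal difference does not cause protocol flapping."""
    mean_v4 = sum(samples_v4) / len(samples_v4)
    mean_v6 = sum(samples_v6) / len(samples_v6)
    return "IPv6" if mean_v6 > margin * mean_v4 else "IPv4"

# Hypothetical per-chunk throughput samples in Mbps.
print(preferred_protocol([18.0, 20.0, 19.0], [30.0, 28.0, 29.0]))  # IPv6 clearly faster
print(preferred_protocol([25.0, 26.0], [24.0, 23.0]))              # falls back to IPv4
```

The margin parameter reflects the finding that IPv4 can still win in some cases (e.g. tunneled IPv6), so switching should require a clear advantage.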
This document provides an overview of bandwidth estimation in the Janus WebRTC server. It discusses:
- The importance of bandwidth estimation and congestion control for real-time media like WebRTC.
- Challenges in applying existing bandwidth estimation algorithms designed for endpoints (like GCC) to servers that don't generate their own media.
- An approach taken in Janus to develop a simpler, ad-hoc bandwidth estimation technique for servers based on acknowledged rate, losses, and delays - without relying on existing complex standards-track algorithms.
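A minimal sketch of such an ad-hoc estimator: back off when losses or rising delay appear, otherwise probe upward toward the acknowledged rate (the thresholds and factors below are illustrative, not Janus's actual values):

```python
def update_estimate(current_bps, acked_bps, loss_ratio, delay_increasing):
    """One step of a simple server-side bandwidth estimator driven by the
    acknowledged receive rate, observed losses, and delay trend."""
    if loss_ratio > 0.02 or delay_increasing:
        # Congestion signal: drop toward what the receiver actually acked.
        return max(acked_bps * 0.85, current_bps * 0.5)
    # Path looks clean: probe upward, capped relative to the acked rate.
    return min(current_bps * 1.05, acked_bps * 1.5)

est = 1_000_000
est = update_estimate(est, acked_bps=950_000, loss_ratio=0.0, delay_increasing=False)
print(est)  # grows while the path is clean
est = update_estimate(est, acked_bps=600_000, loss_ratio=0.05, delay_increasing=False)
print(est)  # drops sharply after losses
```

The appeal for a server is that every input here (acked bytes, loss reports, delay trend) is already available from RTCP feedback, with no need to implement a full standards-track algorithm like GCC.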
Creating Open Data with Open Source (beta2) (Sammy Fung)
The document discusses creating open data using open source tools. It provides an overview of open data and Tim Berners-Lee's 5 star deployment scheme for open data. The author then describes using Python and the Scrapy framework to crawl websites and extract structured data to create open datasets. Specific examples discussed are the WeatherHK and TCTrack projects, which extract weather data from government websites. The author also proposes the hk0weather open source project to convert Hong Kong weather data into JSON format. The goal is to make more government data openly available in reusable, machine-readable formats.
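The extract-and-convert step can be sketched with only the Python standard library (the HTML snippet and field names below are invented for illustration, not the Hong Kong Observatory's actual page or the hk0weather schema):

```python
import json
from html.parser import HTMLParser

class TempExtractor(HTMLParser):
    """Pull temperature values out of <td class="temp"> cells in a
    hypothetical weather page, to be re-published as JSON."""
    def __init__(self):
        super().__init__()
        self.in_temp = False
        self.temps = []

    def handle_starttag(self, tag, attrs):
        if tag == "td" and ("class", "temp") in attrs:
            self.in_temp = True

    def handle_data(self, data):
        if self.in_temp:
            self.temps.append(float(data))
            self.in_temp = False

html = '<table><tr><td class="temp">23.5</td><td class="temp">24.1</td></tr></table>'
parser = TempExtractor()
parser.feed(html)
print(json.dumps({"temperatures_c": parser.temps}))  # machine-readable output
```

A real crawler (e.g. with Scrapy, as the document describes) adds fetching, scheduling, and politeness on top; the parsing-to-JSON step is the part that turns a webpage into open data.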
The document discusses Request for Comments (RFCs) and the process for developing Internet standards. It explains that RFCs are technical documents that describe Internet protocols and procedures. RFCs can be proposed by engineers and adopted as Internet standards if they are widely implemented and supported. The standards process involves submitting drafts to the IETF that progress through stages from experimental to proposed standard to Internet standard. Standards aim for high quality, prior implementation, openness, and timeliness.
WebRTC Standards & Implementation Q&A - The Future is Now2! (Amir Zmora)
This session continues a previous one with a similar title. This time the focus was on:
WebRTC 1.0 stuff - Content hints to browser and screen sharing issues + suggestions.
Beyond WebRTC 1.0 - New charter update, what developers want (looking at developer surveys), SDP deprecation, QUIC vs. RTP, and two main proposals for extensions to the standard.
Meetup #3: Migrating an Oracle Application from on-premise to AWS (AWS Vietnam Community)
The document summarizes a case study of migrating an Oracle application from on-premises infrastructure to AWS. It describes the existing on-premises architecture, including hardware, software, network/security configuration, and disaster recovery requirements. It then discusses the challenges of meeting the recovery time and recovery point objectives on AWS. A first proposed solution is outlined along with its problems. Finally, the document proposes an improved solution on AWS and estimates it can save 70% on infrastructure costs.
WebRTC Webinar and Q&A - IP Address Privacy and Microsoft Edge Interoperability (Amir Zmora)
A WebRTC webinar explaining what all the hype around IP address privacy in WebRTC was about, what the risks are, and how WebRTC handles them. The webinar also covers WebRTC browser interoperability, specifically interoperability with Microsoft Edge.
The webinar is part of the monthly WebRTC live Q&A sessions by Alex Gouailard, Dan Burnett and Amir Zmora.
This document discusses efforts to consolidate best current operational practices (BCOPs) across the network operator community. It outlines the problems with existing sources of operational guidance being scattered and outdated. The proposed solution is a standardized BCOP development process to create a searchable repository of vetted guidance documents. So far some initial documents have been written and the process is being socialized at operator conferences to expand participation and the document library. The goal is to hand the effort off to the Internet Society to establish it as an ongoing program.
What's so special about 512?, by Geoff Huston [APNIC 38 / APOPS 3] (APNIC)
The document discusses how the growth of routing tables poses challenges for routers. It summarizes BGP routing statistics that show IPv4 routing tables growing at around 10% per year while IPv6 tables grow faster at 20-40% per year. While overall growth is manageable now, projections estimate IPv4 tables could reach 1 million entries by 2024. Router memory technologies like TCAM have limitations in capacity, cost and power that may be strained by future growth. Memory and processing speeds will also need to improve to sustain higher link speeds, potentially requiring changes to routing protocols or packet formats. In summary, routing table and traffic growth trends pose technical challenges for router scaling that may require innovative solutions if unchecked.
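The projection arithmetic is simple compound growth. Starting from roughly 512k IPv4 routes (the "512" of the title) and the summary's 10% annual growth rate (both figures are illustrative round numbers, not the talk's exact data):

```python
def project(entries, growth_rate, years):
    """Compound routing-table growth, year by year."""
    sizes = [entries]
    for _ in range(years):
        sizes.append(sizes[-1] * (1 + growth_rate))
    return sizes

sizes = project(512_000, 0.10, 10)  # roughly 2014 -> 2024
crossing = next(i for i, s in enumerate(sizes) if s >= 1_000_000)
print(f"~{sizes[-1]:,.0f} entries after 10 years; crosses 1M in year {crossing}")
```

Under these round-number assumptions the table passes 1 million entries in year 8, consistent with the document's estimate of roughly 1 million IPv4 entries by 2024; the faster IPv6 growth rate would compound even more steeply.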
The document discusses the benefits for researchers to participate in Internet standards bodies like the IETF. It notes that researchers can learn about real-world problems from network operators, vendors and others involved in standards. While standards work has a different focus than academic research, participating allows researchers to directly impact the development of the Internet. The document outlines the structure and processes of the IETF, from initiating new work to moving proposals through working groups to publication. It encourages researchers to get involved to collaborate with others and help build the Internet, while also gaining potential career and funding opportunities.
The document discusses the World IPv6 Launch event scheduled for June 6, 2012. It notes that IPv4 addresses are exhausted, IPv6 is the replacement standard that has been available for over 15 years, and the 2012 event aims to fully transition the internet to IPv6 without the ability to rollback to prevent future growth issues due to IPv4 exhaustion. Major internet organizations are participating to ensure all content and services are fully accessible over IPv6.
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024 (APNIC)
Ellisha Heppner, Grant Management Lead, presented an update on the APNIC Foundation to the PNG DNS Forum, held from 6 to 10 May 2024 in Port Moresby, Papua New Guinea.
Registry Data Accuracy Improvements, presented by Chimi Dorji at SANOG 41 / I... (APNIC)
Chimi Dorji, Internet Resource Analyst at APNIC, presented on Registry Data Accuracy Improvements at SANOG 41 jointly held with INNOG 7 in Mumbai, India from 25 to 30 April 2024.
Related content
Similar to Making an RFC in Today's IETF, presented by Geoff Huston at IETF 119
This document discusses factors that influence developers to contribute to discussions in open source software projects.
It builds models to predict whether developers will contribute to discussion threads with up to 89% accuracy. The most important factors are the developer's previous contribution activity, the length and content of the thread, and how much knowledge the developer has about the topic.
Developers are more likely to contribute when they have relevant knowledge, see a lack of responses from others, and have time available. Models can help identify threads most needing a developer's contribution, but marking a thread as such does not mean it shouldn't still be read.
This document summarizes an update on IPv6 activity in CERNET2 that was presented on March 5, 2015. It discusses that CERNET2 has had a pure IPv6 backbone since 2003 connecting over 600 universities. IPv6 related research and experiments are conducted on CERNET2. Traffic statistics from January 2015 show backbone traffic exceeding 40Gbps and 10Gbps in some locations. The document also discusses challenges with scaling the DNS root server system and efforts to address this through techniques like anycasting and expanding the number of root server operators.
Cluster computing frameworks such as Hadoop or Spark are tremendously beneficial in processing and deriving insights from data. However, long query latencies make these frameworks sub-optimal choices to power interactive applications. Organizations frequently rely on dedicated query layers, such as relational databases and key/value stores, for faster query latencies, but these technologies suffer many drawbacks for analytic use cases. In this session, we discuss using Druid for analytics and why the architecture is well suited to power analytic applications.
User-facing applications are replacing traditional reporting interfaces as the preferred means for organizations to derive value from their datasets. In order to provide an interactive user experience, user interactions with analytic applications must complete in an order of milliseconds. To meet these needs, organizations often struggle with selecting a proper serving layer. Many serving layers are selected because of their general popularity without understanding the possible architecture limitations.
Druid is an analytics data store designed for analytic (OLAP) queries on event data. It draws inspiration from Google’s Dremel, Google’s PowerDrill, and search infrastructure. Many enterprises are switching to Druid for analytics, and we will cover why the technology is a good fit for its intended use cases.
Speaker
Nishant Bangarwa, Software Engineer, Hortonworks
【EPN Seminar Nov.10. 2015】 Key note – Open innovation and Engineering communityシスコシステムズ合同会社
The document summarizes how the IETF process works to develop Internet standards and protocols. It discusses that the IETF is a community that works to describe and solve issues on the Internet, often publishing documents called RFCs. It provides examples of how technologies like Secure Shell and TCP congestion control were developed through discussions, Internet-Drafts, and published RFCs. It emphasizes that the IETF process relies on open discussion and consensus among participants to select solutions that work in practice for standards.
Interactive real-time dashboards on data streams using Kafka, Druid, and Supe...DataWorks Summit
When interacting with analytics dashboards, in order to achieve a smooth user experience, two major key requirements are quick response time and data freshness. To meet the requirements of creating fast interactive BI dashboards over streaming data, organizations often struggle with selecting a proper serving layer.
Cluster computing frameworks such as Hadoop or Spark work well for storing large volumes of data, although they are not optimized for making it available for queries in real time. Long query latencies also make these systems suboptimal choices for powering interactive dashboards and BI use cases.
This talk presents an open source real-time data analytics stack using Apache Kafka, Druid, and Superset. The stack combines the low-latency streaming and processing capabilities of Kafka with Druid, which enables immediate exploration and provides low-latency queries over the ingested data streams. Superset provides the visualization and dashboarding that integrates nicely with Druid. In this talk we will discuss why this architecture is well suited to interactive applications over streaming data, present an end-to-end demo of complete stack, discuss its key features, and discuss performance characteristics from real-world use cases. NISHANT BANGARWA, Software engineer, Hortonworks
AusNOG 2015 - Why you should read RFCs and Internet Drafts (and what you need...Mark Smith
This document discusses Request for Comments (RFCs) and Internet Drafts (IDs), which are documents published by the Internet Engineering Task Force (IETF) that specify and describe Internet standards and proposed standards. It explains that RFCs document standards and operational practices, while IDs are works in progress that may eventually become RFCs. It provides reasons for reading these documents, such as learning how protocols are supposed to work, finding related information, and helping to improve specifications. It also outlines the different types of RFCs and publication process for IDs, and where to find these documents.
Enhancing Performance with Globus and the Science DMZGlobus
ESnet has led the way in helping national facilities—and many other institutions in the research community—configure Science DMZs and troubleshoot network issues to maximize data transfer performance. In this talk we will present a summary of approaches and tips for getting the most out of your network infrastructure using Globus Connect Server.
Interactive real time dashboards on data streams using Kafka, Druid, and Supe...DataWorks Summit
When interacting with analytics dashboards, in order to achieve a smooth user experience, two major key requirements are quick response time and data freshness. To meet the requirements of creating fast interactive BI dashboards over streaming data, organizations often struggle with selecting a proper serving layer.
Cluster computing frameworks such as Hadoop or Spark work well for storing large volumes of data, although they are not optimized for making it available for queries in real time. Long query latencies also make these systems suboptimal choices for powering interactive dashboards and BI use cases.
This talk presents an open source real time data analytics stack using Apache Kafka, Druid, and Superset. The stack combines the low-latency streaming and processing capabilities of Kafka with Druid, which enables immediate exploration and provides low-latency queries over the ingested data streams. Superset provides the visualization and dashboarding that integrates nicely with Druid. In this talk we will discuss why this architecture is well suited to interactive applications over streaming data, present an end-to-end demo of complete stack, discuss its key features, and discuss performance characteristics from real-world use cases.
Speaker
Nishant Bangarwa, Software Engineer, Hortonworks
- The document describes a media player plugin that allows choosing between IPv4 and IPv6 protocols for streaming video chunks to determine the faster connection speed.
- The plugin modifies an existing media player (Hls.js) to measure download speeds for each video chunk delivered over IPv4 or IPv6, and then selects the preferable protocol.
- Statistics from over 950 streaming sessions in Japan show IPv6 speeds are generally faster than IPv4, especially during night hours, though IPv4 can be faster in some cases like with older IPv6 tunneling.
This document provides an overview of bandwidth estimation in the Janus WebRTC server. It discusses:
- The importance of bandwidth estimation and congestion control for real-time media like WebRTC.
- Challenges in applying existing bandwidth estimation algorithms designed for endpoints (like GCC) to servers that don't generate their own media.
- An approach taken in Janus to develop a simpler, ad-hoc bandwidth estimation technique for servers based on acknowledged rate, losses, and delays - without relying on existing complex standards-track algorithms.
Creating Open Data with Open Source (beta2)Sammy Fung
The document discusses creating open data using open source tools. It provides an overview of open data and Tim Berners-Lee's 5 star deployment scheme for open data. The author then describes using Python and the Scrapy framework to crawl websites and extract structured data to create open datasets. Specific examples discussed are the WeatherHK and TCTrack projects, which extract weather data from government websites. The author also proposes the hk0weather open source project to convert Hong Kong weather data into JSON format. The goal is to make more government data openly available in reusable, machine-readable formats.
The document discusses Request for Comments (RFCs) and the process for developing Internet standards. It explains that RFCs are technical documents that describe Internet protocols and procedures. RFCs can be proposed by engineers and adopted as Internet standards if they are widely implemented and supported. The standards process involves submitting drafts to the IETF that progress through stages from experimental to proposed standard to Internet standard. Standards aim for high quality, prior implementation, openness, and timeliness.
WebRTC Standards & Implementation Q&A - The Future is Now2! - Amir Zmora
This session continues the previous one of a similar title. In this session the focus was on:
WebRTC 1.0 stuff - Content hints to browser and screen sharing issues + suggestions.
Beyond WebRTC 1.0 - New charter update, what developers want (looking at developer surveys), SDP (deprecation), QUIC vs. RTP, and two main proposals for extensions to the standard.
Meetup #3: Migrating an Oracle Application from on-premise to AWS - AWS Vietnam Community
The document summarizes a case study of migrating an Oracle application from on-premise to AWS. It describes the existing on-premise architecture including hardware, software, network/security configuration, and disaster recovery requirements. It then discusses challenges of meeting the recovery time and point objectives on AWS. The first proposed solution is outlined along with its problems. Finally, the document proposes an improved solution on AWS and estimates it can save 70% on infrastructure costs.
WebRTC Webinar and Q&A - IP Address Privacy and Microsoft Edge Interoperability - Amir Zmora
A WebRTC webinar explaining the hype around IP address privacy in WebRTC, the risks involved, and how WebRTC handles them. The webinar also covers WebRTC browser interoperability, specifically interoperability with Microsoft Edge.
The webinar is part of the monthly WebRTC live Q&A sessions by Alex Gouailard, Dan Burnett and Amir Zmora.
This document discusses efforts to consolidate best current operational practices (BCOPs) across the network operator community. It outlines the problems with existing sources of operational guidance being scattered and outdated. The proposed solution is a standardized BCOP development process to create a searchable repository of vetted guidance documents. So far some initial documents have been written and the process is being socialized at operator conferences to expand participation and the document library. The goal is to hand the effort off to the Internet Society to establish it as an ongoing program.
What's so special about 512?, by Geoff Huston [APNIC 38 / APOPS 3] - APNIC
The document discusses how the growth of routing tables poses challenges for routers. It summarizes BGP routing statistics that show IPv4 routing tables growing at around 10% per year while IPv6 tables grow faster at 20-40% per year. While overall growth is manageable now, projections estimate IPv4 tables could reach 1 million entries by 2024. Router memory technologies like TCAM have limitations in capacity, cost and power that may be strained by future growth. Memory and processing speeds will also need to improve to sustain higher link speeds, potentially requiring changes to routing protocols or packet formats. In summary, routing table and traffic growth trends pose technical challenges for router scaling that may require innovative solutions if unchecked.
The document discusses the benefits for researchers to participate in Internet standards bodies like the IETF. It notes that researchers can learn about real-world problems from network operators, vendors and others involved in standards. While standards work has a different focus than academic research, participating allows researchers to directly impact the development of the Internet. The document outlines the structure and processes of the IETF, from initiating new work to moving proposals through working groups to publication. It encourages researchers to get involved to collaborate with others and help build the Internet, while also gaining potential career and funding opportunities.
The document discusses the World IPv6 Launch event scheduled for June 6, 2012. It notes that IPv4 addresses are exhausted, IPv6 is the replacement standard that has been available for over 15 years, and the 2012 event aims to fully transition the internet to IPv6 without the ability to rollback to prevent future growth issues due to IPv4 exhaustion. Major internet organizations are participating to ensure all content and services are fully accessible over IPv6.
Similar to Making an RFC in Today's IETF, presented by Geoff Huston at IETF 119
APNIC Foundation, presented by Ellisha Heppner at the PNG DNS Forum 2024 - APNIC
Ellisha Heppner, Grant Management Lead, presented an update on the APNIC Foundation to the PNG DNS Forum held from 6 to 10 May 2024 in Port Moresby, Papua New Guinea.
Registry Data Accuracy Improvements, presented by Chimi Dorji at SANOG 41 / I... - APNIC
Chimi Dorji, Internet Resource Analyst at APNIC, presented on Registry Data Accuracy Improvements at SANOG 41 jointly held with INNOG 7 in Mumbai, India from 25 to 30 April 2024.
APNIC Policy Roundup, presented by Sunny Chendi at the 5th ICANN APAC-TWNIC E... - APNIC
Sunny Chendi, Senior Advisor, Membership and Policy at APNIC, presents 'APNIC Policy Roundup' at the 5th ICANN APAC-TWNIC Engagement Forum and 41st TWNIC OPM in Taipei, Taiwan from 23 to 24 April.
DDoS In Oceania and the Pacific, presented by Dave Phelan at NZNOG 2024 - APNIC
Dave Phelan, Senior Network Analyst/Technical Trainer at APNIC, presents 'DDoS In Oceania and the Pacific' at NZNOG 2024 held in Nelson, New Zealand from 8 to 12 April 2024.
'Future Evolution of the Internet' delivered by Geoff Huston at Everything Op... - APNIC
Geoff Huston, Chief Scientist at APNIC, delivers a keynote presentation on the 'Future Evolution of the Internet' at the Everything Open 2024 conference in Gladstone, Australia from 16 to 18 April 2024.
IP addressing and IPv6, presented by Paul Wilson at IETF 119 - APNIC
Paul Wilson, Director General of APNIC, delivers a presentation on IP addressing and IPv6 to the Policymakers Program during IETF 119 in Brisbane, Australia from 16 to 22 March 2024.
draft-harrison-sidrops-manifest-number-01, presented at IETF 119 - APNIC
Tom Harrison, Product and Delivery Manager at APNIC, presents at the Registration Protocols Extensions working group during IETF 119 in Brisbane, Australia from 16 to 22 March 2024.
Benefits of doing Internet peering and running an Internet Exchange (IX) pres... - APNIC
Che-Hoo Cheng, Senior Director, Development at APNIC, presents on the "Benefits of doing Internet peering and running an Internet Exchange (IX)" at the Communications Regulatory Commission of Mongolia's IPv6, IXP, Datacenter - Policy and Regulation International Trends Forum in Ulaanbaatar, Mongolia on 7 March 2024.
APNIC Update and RIR Policies for ccTLDs, presented at APTLD 85 - APNIC
APNIC Senior Advisor, Membership and Policy, Sunny Chendi presented on APNIC updates and RIR Policies for ccTLDs at APTLD 85 in Goa, India from 19 to 22 February 2024.
Lao Digital Week 2024: It's time to deploy IPv6 - APNIC
APNIC Development Director Che-Hoo Cheng presents on the importance of deploying IPv6 at the Lao Digital Week 2024, held in Vientiane, Lao PDR from 10 to 14 January 2024.
HijackLoader Evolution: Interactive Process Hollowing - Donato Onofri
CrowdStrike researchers have identified a HijackLoader (aka IDAT Loader) sample that employs sophisticated evasion techniques to enhance the complexity of the threat. HijackLoader, an increasingly popular tool among adversaries for deploying additional payloads and tooling, continues to evolve as its developers experiment and enhance its capabilities.
In their analysis of a recent HijackLoader sample, CrowdStrike researchers discovered new techniques designed to increase the defense evasion capabilities of the loader. The malware developer used a standard process hollowing technique coupled with an additional trigger that was activated by the parent process writing to a pipe. This new approach, called "Interactive Process Hollowing", has the potential to make defense evasion stealthier.
Discover the benefits of outsourcing SEO to India - davidjhones387
Discover the benefits of outsourcing SEO to India! From cost-effective services and expert professionals to round-the-clock work advantages, learn how your business can achieve digital success with Indian SEO solutions.
Ready to Unlock the Power of Blockchain! - Toptal Tech
Imagine a world where data flows freely, yet remains secure. A world where trust is built into the fabric of every transaction. This is the promise of blockchain, a revolutionary technology poised to reshape our digital landscape.
Toptal Tech is at the forefront of this innovation, connecting you with the brightest minds in blockchain development. Together, we can unlock the potential of this transformative technology, building a future of transparency, security, and endless possibilities.
Gen Z and the marketplaces - let's translate their needs - Laura Szabó
The product workshop focused on exploring the requirements of Generation Z in relation to marketplace dynamics. We delved into their specific needs, examined the specifics of their shopping preferences, and analyzed their preferred methods for accessing information and making purchases within a marketplace. Through the study of real-life cases, we tried to gain valuable insights into enhancing the marketplace experience for Generation Z.
The workshop was held at the DMA Conference in Vienna in June 2024.
2. Just how long…
• Does it take to produce an RFC these days?
• Is it taking longer than before? Or is the process getting faster?
• What’s the “success rate” for Internet drafts?
3. Internet Drafts
February 2024 status:
• Current drafts in the Internet-Drafts repository: 2,185
• Average version count: 6
• Average time in the repository: 825 days (about 2 years and 3 months)
8. Drafts per Month
The draft submission rate has been steady at some 350 drafts per month (outside of IETF meeting months) since ~2012
9. Drafts per Month
The -00 draft submission rate has been steady at some 50-80 -00 drafts per month (outside of IETF meeting months) since ~2003
There was a small decline in the COVID period, but it appears to be rising again in the past 12 months
10. RFCs
• Internet Drafts since 1 June 1989 : 39,719 (watersprings.org)
• RFCs since April 1969: 9,338. (8,225 since 1 June 1989)
• Draft to RFC “conversion rate”: 21%
• Obsoleted RFCs: 1,130
• Updated RFCs: 1,168
• Verified Errata: 1,681
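The 21% "conversion rate" on this slide follows directly from the two like-for-like counts above (drafts and RFCs since 1 June 1989); a quick sketch of the arithmetic:

```python
# Figures quoted on slide 10.
drafts_since_1989 = 39_719   # Internet Drafts since 1 June 1989
rfcs_since_1989 = 8_225      # RFCs published since 1 June 1989

# Draft-to-RFC "conversion rate": compare like with like,
# i.e. RFCs since mid-1989 against drafts since mid-1989.
conversion_rate = rfcs_since_1989 / drafts_since_1989
print(f"{conversion_rate:.0%}")  # prints "21%"
```

Note this is a rough aggregate ratio, not a per-draft probability: a single RFC typically consumes several draft versions, which the average version count of 6 on slide 3 reflects.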
15. Just how long…
• Does it take to produce an RFC these days?
6 – 8 years
• Is it taking longer than before? Or is the process getting faster?
It’s taking around 4x longer than 20 years ago!
• What’s the “success rate” for Internet drafts?
Around 20%, or 1 in 5
16. See also
RFC8963 - Evaluation of a Sample of RFCs Produced in 2018
Christian Huitema, published Jan 2021