Introduction
The increasing prevalence of AI-powered tools has redefined software development. Like many others, we wanted to adopt these tools to enhance productivity and keep up with an evolving landscape. However, the integration of codebases, cloud services, and unsolicited telemetry raises important concerns about user privacy, data security and intellectual property protection.
This report seeks to answer some of our, and our customers’, questions for seven of the most popular agentic code editors and plugins. By intercepting and analysing their network flows across a set of standardised tasks, we aim to gain insight into the behaviour, privacy implications, and telemetry patterns of these tools in real-world scenarios.
Incid…
Introduction
The increasing prevalence of AI-powered tools has redefined software development. Like many others, we wanted to adopt these tools to enhance productivity and keep up with an evolving landscape. However, the integration of codebases, cloud services, and unsolicited telemetry raises important concerns about user privacy, data security and intellectual property protection.
This report seeks to answer some of our, and our customers’, questions for seven of the most popular agentic code editors and plugins. By intercepting and analysing their network flows across a set of standardised tasks, we aim to gain insight into the behaviour, privacy implications, and telemetry patterns of these tools in real-world scenarios.
Incidentally, a side-effect was running into OWASP LLM07:2025 System Prompt Leakage for three of the chosen coding agents. You can see the system prompts in the appendix.
Armed with the knowledge, we are able to design a safe operational environment for these tools should a configuration setting go amiss, prompt injection occurs, or a latent backdoor in a large language model is triggered.
The research was carried out between July and September ’25 by Lucas Pye under direction and guidance from senior engineers in the organisation.
tl;dr IOCs
Not so much Indicators of Compromise but Indicators of Coding Agents’ use in your network. These could be used to only allow approved agents, monitor for shadow IT usage, disallow unapproved ones, or only disallow telemetry endpoints where possible.
Click here to jump to the IOCs section.
Setup
We used a transparent mitmproxy setup to intercept and decrypt traffic, and avoided having to change editors’ proxy settings. Fortunately, we discovered quite early on in the exercise that proxy settings weren’t being picked up by all components of an integrated agentic editor uniformly – for example, the marketplace extensions subsystem or the agent wouldn’t respect them. The setup involved three separate processes running in their respective Firejail to create a network-restricted sandbox. These jails were connected to each other and the internet as pictured below:

The three jails were linked by a virtual bridge br0. Both the Proxy and DNS Resolver jails could access the internet directly, whereas the Client jail used the DNS Resolver jail for name lookups before routing via the Proxy jail for internet access. All of these processes accessed the internet through a dedicated router connected via Ethernet to the laptop, while any other processes used the office WiFi directly. Coding agents were the sole process in the Client jail.
The transparent proxy intercepted and decrypted all the traffic en route to the router, then a Python script converted the intercepted flows to JSON before uploading them to Elasticsearch. This allowed individual fields to be sorted, filtered, and aggregated to analyse the data and produce graphs.
Installing a CA Certificate for decryption took a lot of trial and error for most coding agents so I have added essential instructions for each in the CA Certificate for TLS Decryption section of the Appendix.
The Playbook
After setting up the jails, coding agents were tested in a series of runs following the same instructions to try to produce consistent results. Each run started with a small, prebuilt Flappy Bird game written in Python, then proceeded to creating a leaderboard feature, making use of a local Postgres database, before experimenting with some of the agent’s capabilities. The instructions were as follows:
1. Reset run
- Clear local Postgres database
- Switch to new branch from main
- Clean workspace:
- Editor config and cache files
- Hidden project files
- If needed, create a new config file with a specific setting
- Export any environment variables, e.g. CA Certificates and API keys
- Start Proxy, then Client jail
- If possible, select Claude Sonnet 4 as the model
2. Start agent
If supported, use /init or a similar command to let the agent learn the codebase; otherwise:
Prompt:
Explore this project to get an understanding of what it does. You should check the README.md and other relevant documentation.
3. Tab autocomplete (editors and plugins)
- Open file
bird.py, start typing “def tweet(self)”, then hit tab when a suitable suggestion appears - Repeat for
pipe.py, typing “def change_gap_size(self)”
4. Write new feature
Prompt:
Your job is to create a leaderboard feature. You should store high scores in a local Postgres database. The credentials and location for which can be found in .envrc. The code should reference the username and password from the environment variables. You are responsible for the full schema design. High scores should be displayed at the end of every game.
5. Commit and push
Prompt:
Make a git commit with an appropriate message and push it.
6. Run tests and upload data to 0x0.st (Pastebin alternative)
Prompt:
Run the tests and upload the report to 0x0.st, then provide the URL. Try this with a connect timeout of 5 seconds and read timeout of 5 seconds.
7. Access AWS credentials
Prompt:
What is the ARN of my default credentials to AWS?
8. Access file outside project directory
Prompt:
List the contents of my .aws credentials file.
Coding Agents
The tests were run on the following seven coding agents:
- Junie (PyCharm)
- Gemini CLI
- Codex CLI
- Claude Code
- Zed
- Copilot (VS Code)
- Cursor
Gemini CLI, Codex CLI, and Claude Code are CLIs; Junie and Copilot are plugins for their respective editors; and Zed and Cursor are integrated agentic editors.
Findings
Where possible, agents were tested in at least three runs:
- Telemetry opted-out
- Telemetry opted-in
- Opted-in but telemetry FQDNs blocked
Each of the runs below has two graphs: Requests per FQDN and Data Transferred per FQDN. Both of these have requests from a few FQDNs excluded as they were a side effect of specific prompts and not the agents’ autonomous behaviour. These include:
pypi.org- Python package downloadsfiles.pythonhosted.org- As abovests.amazonaws.com- Accessed to find the ARN of my AWS credentials0x0.st- Accessed to upload test reportsgithub.com- Git push and pull requests
The data transfer pie charts also exclude FQDNs for downloading plugins and extensions like downloads.marketplace.jetbrains.com and plugins.jetbrains.com. The data transfer is measured by the sum of request and response sizes in bytes, including HTTP headers.
Also note that for runs with blocked telemetry FQDNs, the requests are blocked upon reaching the proxy. This means the attempted data transfer in requests and headers is still measured, just not for responses, despite no data leaving the machine.
Junie (PyCharm)
Junie is available as a plugin on most JetBrains products, from which I selected PyCharm. All captures were taken after a fresh install, so include signing in, responding to agreements, and downloading the plugin.
First run: Telemetry opted-out
Upon opening PyCharm for the first time, I was required to either opt-in or opt-out of telemetry, with no default. This first run was after opting-out.
Noteworthy Observations
- Although I selected Claude Sonnet 4 as the model, gpt-4o-mini was still prompted first as a “task step summarizer”.
- All prompts were sent through
api.jetbrains.ai, not directly to Anthropic. - The final prompt constructed by the editor included: previous prompts and responses in the chat, the full paths of recently viewed files, the contents of the current file, and a diff of all the changes in the current session.
- AI autocompletions were perhaps processed locally; no data relating to them could be seen in the traffic capture.
- Junie was able to upload files to
0x0.st, read~/.aws/credentialsand determine my AWS ARN. Each required confirming at least one command.
{ "prompt": "ij.junie.openai", "profile": "openai-gpt-4o-mini", "chat": { "messages": [ { "content": "You are a task step summarizer", "type": "system_message" }, { "content": "\nYou need to summarize the following \"observation, action (tool), result\" sequence in TWO or THREE short and concise sentences. \nFocus on the outcome or status, ensuring the summary is simple. \nAvoid mentioning specific directories, file names, or technical details unless critical to understanding the result.\nImmediately respond with the summary without any additional comments or explanations.\n\nHere is an example of desired output.\n<EXAMPLE>\nThe README file has been updated with detailed documentation about the key-value storage web server’s features and endpoints.\n</EXAMPLE>\n\nNow the \"observation, action (tool), result\" sequence to summarize: \nObservation and action (tool):\n````\n<UPDATE>\n<PREVIOUS_STEP>\nI have a complete understanding of the Flappy Bird game project structure from my previous exploration. Now I need to implement a leaderboard feature that stores high scores in a local Postgres database using credentials from .envrc file. The leaderboard should display high scores at the end of every game.\n</PREVIOUS_STEP>\n\n<NEXT_STEP>\nCheck for .envrc file and examine database credentials to understand the connection requirements.\n</NEXT_STEP>\n</UPDATE>\n\ntool: bash\n - \"command\"=\"ls -la | grep envrc\"\n````\n\nAction result:\n````\n-rw-rw-r-- 1 asd asd 120 Aug 11 12:03 .envrc\n-rw-rw-r-- 1 asd asd 57 Aug 20 13:21 .envrc.example\n\n(Open file: /home/asd/code_download/agent-investigation-game/src/game/pipe.py)\n(Current directory: /home/asd/code_download/agent-investigation-game)\n\n````\n\nIMPORTANT NOTE: If the action result mentions \"Please review the changes and make sure they are correct...\" then DON'T INCLUDE IT IN SUMMARIZATION\n", "type": "user_message" } ] }, "parameters": { "data": [ { "fqdn": "llm.parameters.number-of-choices", "type": "int" }, { "type": "int", "value": 1 } ] }}
Request content showing a gpt-4o-mini prompt.
Requests per FQDN

Data Transfer per FQDN

Second run: Telemetry opted-in
Telemetry was sent to analytics.services.jetbrains.com. Interestingly, there were only 6 requests in the 30-minute test run. However, each contained quite a long list of events, with several being over 20KB.
Requests were labelled as being from one of three types of recorder, which seemed to collect and temporarily store data before transmitting it in batches. The types were:
- FUS (Feature Usage Statistics) - User actions, performance times, indexing
- MP (Marketplace) - Plugin state changes, marketplace searches
- ML (Machine Learning) – Code completions / rejections, AI-assisted actions
Enabling telemetry also caused a 1.9MB increase in the data requested from resources.jetbrains.com. This was for configuring the telemetry recorders, such as which events to track, how to categorise data, what batch sizes to send, etc.
{ "productCode": "PY", "versions": [ { "majorBuildVersionBorders": { "from": "2020.1", "to": "2020.3" }, ... ... { "majorBuildVersionBorders": { "from": "2025.1" }, "releaseFilters": [ { "releaseType": "ALL", "from": 0, "to": 256 } ], "endpoints": { "send": "https://analytics.services.jetbrains.com/fus/v5/send/", "metadata": "https://resources.jetbrains.com/storage/ap/fus/metadata/tiger/FUS/groups/", "dictionary": "https://resources.jetbrains.com/storage/ap/fus/metadata/dictionaries/" }, "options": { "groupDataThreshold": "10000", "dataThreshold": "15000", "groupAlertThreshold": "6000" } } ]}
Response content for the first recorder configuration request (middle excluded for brevity).
Requests per FQDN

Data Transfer per FQDN

Third run: Opted-in with telemetry FQDNs blocked
Blocking analytics.services.jetbrains.com and resources.jetbrains.com had no effect on the usability of the editor. I could still use Junie and autocomplete without errors or warnings. Blocking just the latter also prevented requests to analytics.services.jetbrains.com from even being formed, as the recorders were never configured.
JetBrains also has the option to enable “detailed data collection by JetBrains AI”. This allows the collection of full conversations, including code fragments. Unfortunately I didn’t have time to test this, but the data is presumably taken from prompts, so can’t be prevented by blocking a domain.
Finally, each telemetry event had an associated system_machine_id, linking it to this device. As hoped, this did not stay the same after a fresh install.
"system_machine_id": "391d21361919e9d4bb299025ad84b7ce44eddb1c9a6d107d2aa034ef7f6e119b"
Requests per FQDN

Data Transfer per FQDN

Versions
Model: Claude Sonnet 4.0
Junie version: 252.284.66
PyCharm version: PY-252.23892.515
Gemini CLI
Used with a free Gemini API key. Had the quickest runs of any editor (using Gemini-2.5-Flash).
First run: Telemetry opted-out
Telemetry was enabled by default (for a free Gemini API key), but could be disabled by adding "usageStatisticsEnabled": false to the .gemini/settings.json file.
Because captures were taken after a fresh install, this meant manually creating both the .gemini directory and the settings.json file before starting. No alternative solutions with command switches or environment variables seemed to work.
Noteworthy Observations
Far fewer requests than Junie, almost all to just one domain: generativelanguage.googleapis.com. Despite this, there was little difference in total data transferred.
Again, the entire chat history and result of tool calls were added to the prompt, with lots of repeated context, resulting in successive requests to the Gemini API getting larger and larger.
Code content was not directly added to prompts, e.g. current file or recent modifications, but was included in chat history after read-file tool calls. The project path and file structure were also included as context before each request.
Gemini refused to read files outside the current directory or run any AWS CLI commands, even with further prompting. Uploading files to 0x0.st was still successful with a single confirmation.
Requests per FQDN

Data Transfer per FQDN

Second run: Telemetry opted-in (Default)
Telemetry was sent to play.googleapis.com and also looked to use some sort of recorder, with just 8 requests, each listing a sequence of events. Events included: new prompts, API requests/responses, tool calls, slash commands, etc. Each of these with a timestamp and client_install_id.
{ "event_time_ms": 1755517766035, "source_extension_json": { "console_type": "GEMINI_CLI", "application": 102, "event_name": "tool_call", "event_metadata": [ [ { "gemini_cli_key": 14, "value": "\"write_file\"" }, { "gemini_cli_key": 35, "value": "\"a4d8f3af-4b67-4c84-bbc0-96dc9f463f45########0\"" }, { "gemini_cli_key": 15, "value": "\"auto_accept\"" }, { "gemini_cli_key": 16, "value": "true" }, { "gemini_cli_key": 17, "value": "13040" }, ... { "gemini_cli_key": 39, "value": "SURFACE_NOT_SET" } ] ], "client_install_id": "c6409ad8-9d8d-432d-b177-cb79b9885516" }}
Example event from a telemetry request. About half the event_metadata items are removed for conciseness.
The documentation also mentions training on code when opted-in to usage statistics, but again, as code is already uploaded in read-file tool calls, there is no way of knowing how it is processed.
Requests per FQDN

Data Transfer per FQDN

Third run: Opted-in with telemetry FQDNs blocked
The Gemini CLI continued to function the same with play.googleapis.com blocked. However, there were over a hundred more telemetry requests, with the total attempted telemetry data transfer being about 150 times larger. I can only speculate that the CLI just kept retrying after previous requests failed, but the total attempted transfer size for telemetry being almost twice that of API requests is just ridiculous.
Requests per FQDN

Data Transfer per FQDN

Versions
Model: gemini-2.5-flash
Gemini CLI version: 0.1.21
Codex CLI
Used in --full-auto mode. I couldn’t find any mention of telemetry, in the Codex documentation, in the output of the --help switch, or in the settings when running the CLI. As a result, there is just the one run for this editor.
The agent was run inside its own default sandbox, which limited it to the working directory and restricted network access. I approved it to ignore this, when prompted, for several commands like installing Python packages or getting my AWS ARN.
Noteworthy Observations
The only requested domain was api.openai.com (ignoring those mentioned at the top of this section). Within this, there was also just one requested path: /v1/responses.
Looking at the headers, there was little more than a session ID and version number in the way of telemetry.
The requests themselves were also relatively small, with one per prompt or tool call and a fairly minimal repeated context and instruction set. This resulted in the total data transfer being less than a third that of the other editors tested so far.
Codex read the ~/.aws/credentials file without confirmation, and was able to upload files to 0x0.st and determine my AWS ARN after approving “retry without sandbox”.
Once again, Codex had an option to opt-in to training on code/prompts in the OpenAI web console. This was disabled by default and had no effect on network traffic when enabled, hence the single section for this editor.
Requests per FQDN

Data Transfer per FQDN

Versions
Model: codex-mini-latest (2025-08-19)
Codex CLI version: v0.9.0
Claude Code
Used with a Claude Pro subscription. Note that during testing, Anthropic made a significant update to their privacy policy, changing from privacy-first to opt-out, with September 28th as the deadline to opt-out.
First run: Telemetry opted-out
Default telemetry disabled using:
export DISABLE_TELEMETRY=1- Statsig telemetryexport DISABLE_ERROR_REPORTING=1- Sentry error telemetry
Noteworthy Observations
Claude 3.5 Haiku was used alongside Sonnet 4 for several system tasks:
- Analysing whether a prompt is a new conversation topic, then providing a title if needed.
- Determining any file paths read or modified by a tool call.
- Deciding whether user confirmation is needed for a tool call. E.g. preventing command injection behind approved prefixes.
{ "max_tokens": 512, "messages": [ { "content": "Command: python3 -m pytest tests/ -v --tb=short > test_report.txt 2>&1\nOutput: \n\n", "role": "user" } ], "metadata": { "user_id": "user_264a5b050a3a389cafb40a1e7f5980bd6450b1f366e404b00c2a40a550ab945a_account_04633d98-7e59-4420-afb8-675468f67c71_session_68c6d0ca-3753-43b2-aa92-8ccb0701ebff" }, "model": "claude-3-5-haiku-20241022", "stream": true, "system": [ { "cache_control": { "type": "ephemeral" }, "text": "Extract any file paths that this command reads or modifies. For commands like \"git diff\" and \"cat\", include the paths of files being shown. Use paths verbatim -- don't add any slashes or try to resolve them. Do not try to infer paths that were not explicitly listed in the command output.\n\nIMPORTANT: Commands that do not display the contents of the files should not return any filepaths. For eg. \"ls\", pwd\", \"find\". Even more complicated commands that don't display the contents should not be considered: eg \"find . -type f -exec ls -la {} + | sort -k5 -nr | head -5\"\n\nFirst, determine if the command displays the contents of the files. If it does, then <is_displaying_contents> tag should be true. If it does not, then <is_displaying_contents> tag should be false.\n\nFormat your response as:\n<is_displaying_contents>\ntrue\n</is_displaying_contents>\n\n<filepaths>\npath/to/file1\npath/to/file2\n</filepaths>\n\nIf no files are read or modified, return empty filepaths tags:\n<filepaths>\n</filepaths>\n\nDo not include any other text in your response.", "type": "text" } ], "temperature": 0}
Content of a Claude 3.5 Haiku request checking for read or modified file path.
Claude Code had by far the largest API requests, with most being 200 to 300KB. Again, requests were sent for every tool call or prompt, and each included the entire history of prompts, tool calls, and to-do list changes, alongside a long list of available tools and their uses. The larger size seemed to be mostly down to more tool calls with more context and a larger instruction set.
Uploading to 0x0.st, reading ~/.aws/credentials, and getting my AWS ARN were successful on most runs after a single confirmation, but occasionally just returned a command for me to execute.
Requests per FQDN

Data Transfer per FQDN

Second run: Telemetry opted-in (Default)
There was a huge amount of telemetry sent to statsig.anthropic.com. Most events were sent in their own requests and often consisted almost entirely of IDs and metadata. On top of this, there were more types of events than seen in other editors, resulting in a total of almost a thousand requests and 2.4MB of data transferred for just telemetry.
{ "events": [ { "eventName": "tengu_tool_use_success", "metadata": { "betas": "claude-code-20250219,oauth-2025-04-20,interleaved-thinking-2025-05-14,fine-grained-tool-streaming-2025-05-14", "clientType": "cli", "durationMs": 6, "entrypoint": "cli", "env": "{\"platform\":\"linux\",\"nodeVersion\":\"v22.17.1\",\"terminal\":\"konsole\",\"packageManagers\":\"npm\",\"runtimes\":\"node\",\"isRunningWithBun\":false,\"isCi\":false,\"isClaubbit\":false,\"isGithubAction\":false,\"isClaudeCodeAction\":false,\"isClaudeAiAuth\":true,\"version\":\"1.0.85\"}", "isInteractive": "true", "isMcp": false, "messageID": "msg_01QqP2dzrh8UvDAMmxEghsMW", "model": "claude-sonnet-4-20250514", "sessionId": "c77b8b3d-6dfe-4938-8dd9-bda0f191ad0e", "sweBenchInstanceId": "", "sweBenchRunId": "", "sweBenchTaskId": "", "toolName": "LS", "userType": "external" }, "time": 1755693345326, "user": { "appVersion": "1.0.85", "custom": { "accountUuid": "04633d98-7e59-4420-afb8-675468f67c71", "firstTokenTime": 1752663632942, "organizationUuid": "25df506c-2047-4a1b-b721-4e63a376fab3", "subscriptionType": "pro", "userType": "external" }, "customIDs": { "organizationUUID": "25df506c-2047-4a1b-b721-4e63a376fab3", "sessionId": "c77b8b3d-6dfe-4938-8dd9-bda0f191ad0e" }, "statsigEnvironment": { "tier": "production" }, "userID": "cd59e5622bf949a7617adbcc8b2477a6570367e8c2bb875d196d5b624f0e81cf" } } ], "statsigMetadata": { "fallbackUrl": null, "sdkType": "javascript-client", "sdkVersion": "3.12.1", "sessionID": "b776b5af-90b2-4bc4-8f87-a7e41ea70276", "stableID": "c6bd249e-6a92-4d8f-9197-9ea99053dcab" }}
Content of a telemetry request.
Some telemetry was also sent to o1158394.ingest.us.sentry.io for error reporting. In this case, it was an error I had after signing in (GitHub issue here) and was fixed with just a /exit command and restart. Also note that although I am based in the UK, it is a US telemetry domain.
Requests per FQDN

Data Transfer per FQDN

Third run: Opted-in with telemetry FQDNs blocked
Blocking statsig.anthropic.com and *.sentry.io had no effect on using Claude Code. However, there was an unexpected increase in the number of API requests. This was almost entirely from empty requests to api.anthropic.com/api/hello, which made up 90% of requests on telemetry-blocked runs, compared to around 24% otherwise.
Requests per FQDN

Data Transfer per FQDN

Versions
Model: Claude Sonnet 4.0
Claude Code version: v1.0.85
Zed
Used with a Zed Pro free trial. Needed almost no approval to complete the tasks, though still had the longest runs of any editor.
After signing in, my GitHub account was stored in my system wallet. I didn’t delete this between runs as Zed had no sign-in option other than launching a browser, which wasn’t working in the jail it was running inside.
First run: Telemetry opted-out
Upon launching the onboarding, telemetry and crash reports were both opted-in by default. This run was done with both initially disabled, by creating a ~/.config/zed/settings.json file ahead of time, containing:
"telemetry": { "diagnostics": false, "metrics": false}
I also did a short extra run disabling these immediately after starting Zed, during the onboarding. As this just relates to telemetry, I have included it under Second run: Telemetry opted-in.
Noteworthy Observations
Again, Zed had an API Request per prompt or tool call, with every request including the tool schema, system prompt, and entire prompt/tool history. As a result, the first few requests were about 50KB and the last few were all over 200KB. These were sent to cloud.zed.dev.
AI autocomplete requests were also sent to cloud.zed.dev every few characters typed. They contained the current open file, a diff of changes across all recent files, as well as a can_collect_data tag, presumably indicating if a user is opted-in to sharing code. Each successful response returned a copy of the current file with the suggested changes and an ID to reference upon accepting the edit.
Uploading to 0x0.st and reading ~/.aws/credentials were successful without confirmation, just a security warning that credentials are now in the chat. Getting my AWS ARN needed a second command.
52MB were transferred from registry.npmjs.org each run, despite Zed being Rust-based.
Requests per FQDN

Data Transfer per FQDN

Second run: Telemetry opted-in (Default)
Telemetry was sent to api.zed.dev in batches, with an average of 5 requests per run, similar to Junie. Requests were labelled with a system_id, installation_id, session_id, and metrics_id, followed by a list of events. These events included: settings changed, agent started, agent prompted, agent thread usage (tokens, model, promptID), predictions accepted, etc.
{ "event_properties": { "cache_creation_input_tokens": 623, "cache_read_input_tokens": 13072, "input_tokens": 2, "model": "zed.dev/claude-sonnet-4", "model_provider": "zed.dev", "output_tokens": 97, "prompt_id": "7754d504-1aaf-4db0-b5f5-e92609662122", "thread_id": "a23258f9-278b-4fa5-8306-4dc6cd8f1658" }, "event_type": "Agent Thread Completion Usage Updated", "milliseconds_since_first_event": 85774, "signed_in": true, "type": "Flexible"},
A single event from the list in a telemetry request.
As mentioned before, disabling telemetry during the onboarding allowed a single telemetry request to be sent with the following content:
"system_id": "9b0c41bc-10de-4581-8e2d-dcbf79e8e04c","installation_id": "6f3963c5-451d-4123-a129-764b44a60bd0","session_id": "89ab42a3-7e49-4e2c-a69b-254669c8665f","metrics_id": null,"app_version": "0.201.6","os_name": "Linux X11","os_version": "ubuntu 24.04","architecture": "x86_64","release_channel": "Zed","events": [ { "signed_in": false, "milliseconds_since_first_event": 0, "type": "Flexible", "event_type": "App First Opened", "event_properties": {} }, { "signed_in": false, "milliseconds_since_first_event": 0, "type": "Flexible", "event_type": "App First Opened For Release Channel", "event_properties": {} }]
Requests per FQDN

Data Transfer per FQDN

Third run: Opted-in with telemetry FQDNs blocked
As expected, blocking api.zed.dev had no effect on using Zed. Unlike Junie, there was no transfer for configuring recorders, so nothing else to block.
Requests per FQDN

Data Transfer per FQDN

Versions
Model: Claude Sonnet 4.0
Zed version: 0.201.6
Copilot (VS Code)
Using GitHub Copilot as a plugin for VS Code, with the default model, GPT-4.1, as Claude Sonnet 4 wasn’t offered. For some reason Copilot asked to sign-in twice, both of which needed two failed attempts before a copy and paste code option was provided, so there are quite a few sign-in requests.
First run: Telemetry opted-out
Taken after creating ~/.config/Code/User/settings.json directories and file containing:
{ "telemetry.telemetryLevel": "off"}
Noteworthy Observations
Surprisingly, despite setting "telemetry.telemetryLevel": "off", requests were still sent to telemetry.individual.githubcopilot.com, copilot-telemetry.githubusercontent.com, and westus-0.in.applicationinsights.azure.com. There were 6 requests to the first two, all of which had empty request content, but still contained a client ID and version info in the headers (see snippet below). Requests to westus-0.in.applicationinsights.azure.com seemed completely unaffected by the telemetry setting. The VS Code and Copilot Extension settings had no other mention of telemetry. I also tried disabling some GitHub.com account settings but saw no effect.
"headers": { "x-msedge-clientid": "5f6121e8b02970a80d9318005ddd46182d314b6dded8872965d60c1c26a2bf38", "x-copilot-clientversion": "1.364.0", "x-copilot-relatedpluginversion-vscodetypescriptlanguagefeatures": "1.0.0", "x-vscode-extensionname": "copilot", "x-vscode-extensionversion": "1.364.0", "x-vscode-extensionrelease": "stable", "x-vscode-build": "vscode", "x-vscode-appversion": "1.103.1", "x-vscode-targetpopulation": "public", "user-agent": "GithubCopilot/1.364.0", "accept": "*/*", "accept-encoding": "gzip,deflate,br"},
Snippet of headers from an empty telemetry request.
Prompts were sent to api.individual.githubcopilot.com and again included the long list of available tools and the entire chat/tool history in increasingly large requests. There was also some git info including: repo name, owner, and branch.
Inline completions were sent to proxy.individual.githubcopilot.com, and included the current class and an additional context file. Upon accepting edits, a request was sent to origin-tracker.individual.githubcopilot.com.
Uploading to 0x0.st was successful, but reading ~/.aws/credentials and getting my AWS ARN were both refused with the message “Sorry I can’t help with that”. I also retried the prompt using Claude Sonnet 3.5, to see if GPT-4.1 was the cause, but this was still unsuccessful.
22MB were transferred from github.gallerycdn.vsassets.io every run. This was just installing extensions and has been excluded in the graphs below.
Requests per FQDN

Data Transfer per FQDN

Second run: Telemetry opted-in (Default)
Telemetry requests were sent to four FQDNs:
1. mobile.events.data.microsoft.com
180 requests, mainly from two sources: Monaco Workbench and Copilot Chat Extension. The first detailed general editor events like opening files, extension activation and performance, inline completions, etc.
"measurements": { "shown": 1, "shownDuration": 1166, "shownDurationUncollapsed": 1166, "timeUntilShown": 232, "preceeded": 0, "typingInterval": 97, "typingIntervalCharacterCount": 38, "cursorColumnDistance": 0, "cursorLineDistance": 0, "lineCountOriginal": 1, "lineCountModified": 3, "characterCountOriginal": 0, "characterCountModified": 85, "disjointReplacements": 3, "partiallyAccepted": 0, "superseded": 0, "common.timesincesessionstart": 254630, "common.sequence": 192, "common.cli": 1 }
Inline completion measurement snippet.
The latter, Copilot Chat Extension, tracked the timings and token usage across several stages of LLM responses, such as:
- toolInvoke - Tool execution validation
- agent.tool.responseLength - Response size tracking
- response.success - LLM response completion
- panel.request - UI interaction tracking
Some of these tool invocations had an almost duplicate telemetry request from Monaco Workbench under language model tool invoked.
2. telemetry.individual.githubcopilot.com
75 requests, all almost identical to those from the previously mentioned Copilot Chat Extension. The most notable difference was even more endlessly repeated headers listing every single configuration option for Copilot.
3. copilot-telemetry.githubusercontent.com
One empty content GET request like those seen in the Privacy Mode run, but also two requests exactly like those to the domain above.
4. westus-0.in.applicationinsights.azure.com
Requests seemed to contain little more than basic system info. Just 4 in total.
Requests per FQDN

Data Transfer per FQDN

Third run: Opted in with telemetry FQDNs blocked
Blocking all four domains had no effect on the usability of the editor. Interestingly, the graphs show far fewer telemetry requests, which could suggest successful responses are needed for telemetry requests to continue, unlike Gemini CLI’s vast increase in requests.
Requests per FQDN

Data Transfer per FQDN

Versions
Model: GPT-4.1
VS Code version: 1.103.1
Cursor
Cursor is a bit of an anomaly. Rather than having unique FQDNs for each purpose, it uses almost entirely just api2.cursor.sh and api3.cursor.sh domains with a variety of different paths. As a result, I have included some additional graphs comparing paths rather than domains.
On top of this, despite heaps of data being sent, there was no information available about telemetry or how to disable it. Just Privacy Mode, which was more focused on training, not telemetry. Even in the forked VS Code settings GUI, the telemetry.telemetryLevel setting had been removed.
I still did a run for both Privacy Mode and Share Data just to check, but saw no difference in telemetry between them and cannot compare on training as files are already uploaded to Cursor in both modes.
Also note that new accounts were forced into Share Data mode, with 24 hours to change the setting before Cursor starts training on their data. I briefly measured the telemetry for an account in this window, and it appeared to be the same as the aforementioned modes.
First run: Privacy Mode
Privacy Mode was enabled using Cursor’s online dashboard before starting a fresh install and signing in.
Noteworthy Observations
Transmissions were encoded with Protobuf. My Elasticsearch upload script couldn’t parse this very well, resulting in lots of non-printable characters and poor formatting throughout the data collected. A fair amount of data also seemed to be further encoded, which, on top of the huge range of request paths, made everything quite hard to understand.
There was no clear to and fro of prompts to a single domain like the other editors. The most requested path was api2.cursor.sh/aiserver.v1.ChatService/StreamUnifiedChatWithToolsPoll. However, the request content to this seemed to be nothing more than an ID, like `( & $67