Observations:
- Easy to use (very little friction, although Command+L to open chat would be nice; the first time I used it I couldn't locate the chat pane, though the humility to have it tucked away was nice :) )
- It generates and asks to run preliminary code (e.g. str(object) to see what it's working with), which I wasn't expecting, nice!
- Has most (all?) of the niceties Cursor has slowly added over the months
- Small thing, but I'd definitely prefer that when it generates code it places it in a script for me to read and run line by line, rather than showing the code in the chat pane and expecting the user to select between Run (all of it!) or Cancel. I'd generally never run unfamiliar code all at once, let alone LLM-generated code, and would much rather read the code in the script pane where I can modify, comment, edit, and eventually run line by line.
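(For readers unfamiliar with the preliminary-inspection step mentioned above: str() is base R's one-call structural summary, a natural first command for an assistant, or a human, to run before touching unknown data. A minimal sketch with an invented data frame:)

```r
# Invented example data for illustration.
# str() prints each column's type and its first few values, which is
# enough to orient an assistant (or a human) before any real analysis.
df <- data.frame(x = rnorm(10), g = letters[1:10])
str(df)

# A numeric follow-up an assistant might also propose:
summary(df$x)  # min/quartiles/median/mean/max of the numeric column
```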
I've used Cursor (in tandem with RStudio) a fair bit for R/Shiny development. After half an hour playing with this, I'm pretty confident this is a better experience, one I'd prefer over Cursor/RStudio. Well done.
Thanks for the comments! We're working on the shortcuts, and those should come soon. You can also edit the code in the conversation before you hit Run, and it'll run the modified version. (Try "Generate 10 normals in the console" and then change it to 20 when it gives rnorm(10).)
Can you say a bit more about what you mean with line-by-line running from a script? Would the script show up in the editor pane (top left) and then you'd run the script line by line and the AI would see the outputs from that code being run?
I wouldn't expect the AI to see the output of the human running code line by line, but simply that when AI generates code, it places it in the script pane (just how cursor typically edits file(s) to place its generated code in the right places within those files).
For quick and dirty EDA in R, the appropriate place is probably just going to be the bottom of the current script. For example here [0] Rao gives the requested code; rather than being presented with Run/Cancel options, I'd prefer an 'Accept' button that places the code at the end of the current script in the script pane so it can be read by the human and run a line at a time (to verify that it works each step of the way)
[0] https://imgur.com/a/8x4Ykdt
Makes sense - we're planning to have "Rao rules" (essentially Cursor rules) out tomorrow, so you'll be able to include an instruction that it should always append to some file you have open. Hopefully it obeys that and then you can run the code from there.
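(For concreteness, a rule like the one described might read something like the following. The file name and exact format are assumptions on my part; Cursor-style rules are plain-text instructions, and the comment above doesn't specify Rao's syntax.)

```
# .raorules  (hypothetical file name; syntax assumed, not confirmed)
When generating R code, do not present Run/Cancel in the chat pane.
Instead, append the generated code to the end of the currently open
script (e.g. analysis.R) so I can review and run it line by line.
```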
>Small thing, but I'd definitely prefer that when it generates code it places it in a script for me to read and run line by line, rather than showing the code in the chat pane and expecting the user to select between Run (all of it!) or Cancel.
I agree. I'm one of many R users whose workflow for science is still, I'm sheepish to admit, "select some code and run it, then use the variable explorer."
On the whole though I'm very excited to see agentic abilities coming to R.
Thanks for the comment! As Will mentioned below, hopefully "Rao rules" will help with this. If not, we'll think about intuitive ways to allow the user to run individual segments of code before running/accepting all the changes.
Yeah, let's make R even more arcane by letting an LLM that has barely seen any R code attempt to write some.
Some people just want to see the world burn while they roll to the bank.
Posit, the maker of RStudio, is a Public Benefit Corporation. Basically all of their products are open source. Something about making a fork and charging money for it doesn't sit right.
I've wondered when something like this would pop up. Cursor just doesn't lend itself very well to interactive data work. I actually tried to put together something similar myself over Christmas break as a proof of concept: https://github.com/demirev/radian
FYI "Radian" is also the name of a very popular R console: https://github.com/randy3k/radian
Not a fan of coding agents but really appreciate the indirect love for RStudio.
> RStudio is used by about 2 million data scientists and academics, but they currently lack a coding assistant within their IDE.
Not to take anything away from your announcement, but Github Copilot works in RStudio.
GitHub Copilot is great for line completion, but we meant something more like Cursor's agentic ability.
Is the pricing paying for access to models, or is it expected that users must have their own subscriptions/API access to OpenAI/Anthropic/whichever model providers?
If the former, is there a non-paid option for people to bring their own model access? If the latter, what is the subscription pricing paying for? I am in favor of subscriptions for ongoing services/costs, but in the absence of ongoing costs, I'd prefer a pay-once option.
The pricing pays for ongoing access to models. Users are not expected to have their own OpenAI/Anthropic API keys.
There is currently no non-paid option that allows users to bring their own models. If you're really interested in a feature like that, though, we'd be happy to chat. Feel free to reach out at jorgeguerra@lotas.ai
Positron IDE is a VS Code fork intended for the R language. It feels more modern than RStudio, and I was under the impression that it would replace it at some point. That raises two questions: does GitHub Copilot or your extension work in Positron IDE?
Right now our assistant is only available in RStudio. We do plan to develop an assistant for Positron-like IDEs in the future though.
Positron is made by Posit, formerly the RStudio company. So I would say it's basically the new RStudio.
Somehow, I assumed that a Cursor-like capability for RStudio would be implemented as an add-in extension, not via fork. Does this mean that every new release of RStudio will require a rebuild by Lotas and a re-download by its users?
There's a lot that had to be changed at a pretty deep level to build this assistant, so an add-in wasn't really feasible.
This tactic is usually used to attract VC money down the road. VCs don't typically invest in plugins/add-ons; they prefer products.
Can this be run on-prem (like RStudio Server) against OpenAI-API-compatible LLMs (i.e. internal deployments)?
It can, but it'll take more setup, since there's a backend we have configured that does the actual LLM communication; you'd need to set that up internally for an on-prem deployment. Can you send us a message (https://www.lotas.ai/contact) if you want to talk more?
Is this better than Positron Assistant? https://positron.posit.co/assistant.html
Positron is solid, but we think our search and apply-edit are better at the moment, and we think more people use RStudio right now.
Practically, Positron also requires you to use your own LLM keys (so you're on the hook for the tokens/long context, and if you don't have a BAA or ZDR, you may not be able to use it with sensitive data). For Rao, we manage the tokens and are moving quickly towards HIPAA/SOC 2.
We also plan to develop something similar to Positron in the future.
Positron Assistant will eventually support models other than BYOK, including connecting to models in Bedrock, local models, hosted models in Copilot, etc.
Also note that Positron perma-unlocks the internal APIs that Microsoft Copilot Chat uses, so any extension can use them. "Positron Assistant" is itself mostly implemented as an extension. So it's possible for a 3rd party extension to become a chat participant in Positron or integrate with the Chat panel/inline chat/etc.
This looks really useful for scientific analysis — looking forward to trying it!
Hope it gets better