PyDev adventures

Posting about venturing (and creating) PyDev.


2025-01-17: Using (or really misusing) Path.resolve() in Python

 

I just stumbled upon some Python code that calls a simple `Path(...).resolve()` on a path received from an API, and it reminded me of the many bugs I've tripped over because of it. So, I decided to share why I hope this function eventually stops being so pervasive, as it's usually the wrong thing to call (perhaps it spread because when Python went from os.path to pathlib no real replacement was added, and now you need to use both).

The main source of those bugs is that Path.resolve() will actually resolve symlinks (and substs on Windows).

This means that if the user crafted a structure such as:

/myproject

/myproject/package.json

/myproject/src -> symlink to sources in /allmysources/temp

If you resolve `/myproject/src/myfile.py` to `/allmysources/temp/myfile.py` and then walk upwards trying to find `package.json`, it'll never be found!
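The problem can be reproduced with a small sketch (the directory names mirror the example above; the symlink creation is guarded because it may require privileges on Windows):

```python
import tempfile
from pathlib import Path

def demo():
    tmp = Path(tempfile.mkdtemp())
    (tmp / "myproject").mkdir()
    (tmp / "myproject" / "package.json").write_text("{}")
    (tmp / "allmysources" / "temp").mkdir(parents=True)
    (tmp / "allmysources" / "temp" / "myfile.py").write_text("")
    try:
        # src is a symlink into a completely different tree.
        (tmp / "myproject" / "src").symlink_to(
            tmp / "allmysources" / "temp", target_is_directory=True)
    except OSError:
        return None  # creating symlinks may need privileges on Windows
    p = tmp / "myproject" / "src" / "myfile.py"
    # Walking up the unresolved path still finds the project marker...
    found_before = (p.parent.parent / "package.json").exists()
    # ...but after resolve() the project root is no longer an ancestor.
    found_after = (p.resolve().parent.parent / "package.json").exists()
    return found_before, found_after
```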

I was even looking at the Python docs and found this nugget about `Path.parent`:

If you want to walk an arbitrary filesystem path upwards, it is recommended to first call Path.resolve() so as to resolve symlinks and eliminate ".." components.

And that's exactly why you shouldn't use it: if you do, the parents will be completely off!

So, what should you do?

Unfortunately pathlib is not your friend here: you need `os.path.normpath(os.path.abspath(...))` to remove `..` occurrences from the string and make it absolute (and afterwards, sure, use pathlib for the remainder of your program). Possibly you even want to get the real case stored in the filesystem if you're on Windows (which is very annoying, as you end up needing a bunch of listdir() calls to recover the case stored in the filesystem).
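A sketch of that recipe (the helper name `normalize_path` is mine, not from any library):

```python
import os
from pathlib import Path

def normalize_path(raw: str) -> Path:
    # Make the path absolute and collapse ".." components as pure
    # string manipulation, WITHOUT resolving symlinks the way
    # Path.resolve() would.
    return Path(os.path.normpath(os.path.abspath(raw)))
```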

But also, keep in mind that you usually only need to do that at boundaries (i.e.: when you receive a relative path as a command line argument, for instance; not when you receive an API call from another program -- chances are the cwd of that program and your own differ, and thus calls to `absolute()` or `resolve()` are actually bugs).

But isn't there any use-case for `Path.resolve()`?

As far as I know, the only case where it's valid is when you do want to create a cache of sorts where any representation of that file is considered the same (i.e.: you really want the canonical representation of the file).

Say, in an IDE you opened your `subst` in `x:` and then you have `c:\project\foo.py` and `x:\foo.py` in a debugger, and you want to always show the version that's opened in your IDE. You need to build that mapping so that, regardless of which representation is accessed, you open the version being seen in the IDE. (And oh my god, I've seen so many IDEs get that wrong; the effect is usually that the same file ends up opened under different names in the IDE. I know that in the pydevd debugger this is especially tricky because the name of the file comes from a .pyc file, which depends on how it was generated, and the cached version may not match the version the user is currently seeing. The debugger goes to great lengths to keep an internal canonical representation plus a different version which should be what the user sees, but it's not perfect, and IDEs which actually open the file afterwards don't do the due diligence to apply the proper mapping -- even worse when they fail because the path is the same but the drive is uppercase while internally the path they have has the drive in lowercase.)

Maybe even then it'd be better to use the file's inode as a reference (obtained from `Path.stat()`)...
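A sketch of that idea: on POSIX, the (st_dev, st_ino) pair uniquely identifies a file, so two representations of the same file compare equal (the stdlib's os.path.samefile does essentially this; the helper name here is mine):

```python
from pathlib import Path

def same_file(a: Path, b: Path) -> bool:
    # Two paths refer to the same file if they share device and
    # inode numbers (symlinks are followed by stat()).
    sa, sb = a.stat(), b.stat()
    return (sa.st_dev, sa.st_ino) == (sb.st_dev, sb.st_ino)
```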

Anyways, to sum up: in its current form, I'd recommend ALMOST NEVER using `resolve()`, as it's usually just a source of bugs. Stick to `Path(os.path.normpath(os.path.abspath(...)))` at program boundaries, as that's just saner in general (and when you want to pass files among APIs, make sure paths are already absolute and normalized, and fail if they aren't).

p.s.: just as a disclaimer, I've also seen a very minor subset of users on Linux (just one so far, really, but maybe there are others who want that behavior) who say they want symlinks resolved, as the structure they create is always final and the symlinks are just a convenience to avoid having to cd into that structure (so resolving them doesn't affect the structure they created). But I don't think this is the most common expectation, and if it is, programs probably need a flag determining whether to use it or not (for instance, I know vite from JavaScript has a resolve.preserveSymlinks setting so that the user can decide). Although it's also true that most users don't even care, because they don't put their source code in a symlink or a Windows subst 😊

2024-02-04: PyDev Debugger and sys.monitoring (PEP 669) -- i.e.: really fast debugging for Python 3.12!

The latest release of PyDev (12.0.0) is now available, and it brings a really nice speed improvement for those who are already on Python 3.12! -- If you're a LiClipse user, it's now available in LiClipse 11.

The PyDev Debugger now uses sys.monitoring, which enables faster debugging (in my tests it can be up to 15 times faster than the version using sys.settrace, depending on the use case -- kudos to Mark Shannon for PEP 669 😉).

It took me a while to cover the many scenarios that pydevd deals with, but the good news is that most of the existing pydevd infrastructure didn't need changes (the debugger already had all the concepts needed, as it already tried to trace only the frames which were actually needed, which definitely helped a lot, since that is now pretty central to dealing with tracing using sys.monitoring).

Given that it's now out, I'll talk about how it works and some of the caveats when using it.

So, the first thing to note is that PEP 669 defines a bunch of callbacks which are now related to the code object (and not the frame, as happened with sys.settrace), and inside one of those callbacks the debugger can decide what should happen.

Some of the things that can happen are pausing due to a breakpoint or a step instruction, or deciding that the given code should not be traced again (by returning DISABLE).

The reason it's faster than the previous approach is that the DISABLE is taken into account by the Python interpreter, which bakes it into the code while executing (up until sys.monitoring.restart_events() is called again). This does come with a big caveat though: if multiple threads are running the program and one of them returns DISABLE, the related callback is actually disabled for all threads. Note that if DISABLE is not returned, the speed ends up close to what was available with sys.settrace (which wasn't all that bad in pydevd already, but is definitely a step down from what's now available).
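A minimal sketch of the mechanism (not pydevd's code; requires Python 3.12+, and the function just returns placeholder numbers on older versions): a LINE callback counts events for one function and returns DISABLE, so a second run of the same code produces no events at all:

```python
import sys

def count_line_events(fn):
    """Run fn twice under a LINE callback that returns DISABLE.

    Returns (events_in_first_run, events_in_second_run)."""
    if sys.version_info < (3, 12):  # sys.monitoring is new in 3.12
        return (1, 0)  # placeholder so the sketch degrades gracefully
    mon = sys.monitoring
    counts = [0, 0]
    current = [0]

    def on_line(code, line_number):
        if code is fn.__code__:
            counts[current[0]] += 1
        # The interpreter bakes this in and stops reporting the location.
        return mon.DISABLE

    try:
        mon.use_tool_id(mon.DEBUGGER_ID, "demo")
    except ValueError:
        return (1, 0)  # a real debugger already owns the tool id
    try:
        mon.register_callback(mon.DEBUGGER_ID, mon.events.LINE, on_line)
        mon.set_events(mon.DEBUGGER_ID, mon.events.LINE)
        fn()
        current[0] = 1
        fn()  # DISABLE was baked in: no LINE events this time
    finally:
        mon.set_events(mon.DEBUGGER_ID, 0)
        mon.register_callback(mon.DEBUGGER_ID, mon.events.LINE, None)
        mon.free_tool_id(mon.DEBUGGER_ID)
    return tuple(counts)
```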

This means that the debugger is really much faster when running towards a breakpoint (because it can DISABLE many of the tracing instructions). But after a breakpoint is hit, if one thread is doing a step operation, the speed reverts to being close to the sys.settrace variant: the debugger can still DISABLE tracing for places it knows the user never wants to stop, but it cannot DISABLE tracing for code in any thread where the user may want to stop, because the thread which is stepping could be affected by a DISABLE from another thread which is not stepping (or, at least, the stepping must be considered for all threads, even if a given thread hasn't actually stopped and is not stepping).

Also, it's worth mentioning that sys.monitoring has finer-grained events than sys.settrace, which is good: for instance, it's possible to get exception events separately from events related to entering/returning from a function or from lines, which helps the debugger trace only what's actually needed (the tracing debugger had to do lots of gymnastics to create a separate tracer just for exceptions when there was no need to trace a given scope for lines, so the new code ends up being both simpler and faster).

Note, however, that I've also made some other improvements in pydevd, and stepping should be more responsive even when using sys.settrace on older versions of Python!

p.s.: attach-to-process is still not available for Python 3.12.

Note: back in the day, other players which make use of pydevd, such as JetBrains and Microsoft, did step up to finance it, but at this point the only support it has comes from backers in the community.

So, if you enjoy using pydevd, please consider becoming a supporter... I've actually just set up sponsoring through GitHub (https://github.com/sponsors/fabioz). Any takers for becoming one of the first backers there 😉?

2023-06-23: robocorp.log: a library to answer what happened in a Python run.

It's been a while since I last posted, so I decided to share some details on what I'm working on now (which I think is a really nice -- open source -- tool for the Python ecosystem; in my view it's something close to a "time travel debugger" for Python, although it's mostly labelled as automatic logging for Python 😄).

I'll start with a bit of history though: I'm working with Robocorp right now (Robocorp is a company focused on automation projects; it leverages and provides many open source tools, especially in the Python ecosystem).

A few years back, Robocorp approached me to do a language server for Robot Framework (which started as a testing framework written in Python and is now also used for automations; Robocorp recommends it to many clients doing automations, although many clients also prefer to just use Python directly).

Now, one of the strong points of Robot Framework is that it generates output with information on everything that happened during its execution (Keyword calls -- which is how "methods" are called in Robot Framework -- if statements, return values, etc.), so it's almost a "time travel" debugger, as it records and shows information on everything that happened in the execution.

This brings me to what I'm working on right now: a library which records what happens inside a Python process, along with a UI which makes it possible to inspect it afterwards (it's the Python counterpart of the Robot Framework log).

For those curious, I've created a repo with one example showing the output from Robot Framework and the one generated by Robocorp's Python Framework: https://github.com/fabioz/log_examples.

You can also see a live example (which solves https://rpachallenge.com/): the Robot Framework log.html output as well as the Robocorp Python Framework log.html output.

The easiest way to use it right now is through the tasks from Robocorp's Python Framework (see: https://github.com/robocorp/robo/tree/master/tasks), as the log is automatically generated for tasks run through it. Mostly: mark your entry points with @task (from robocorp.tasks import task), run with `python -m robocorp.tasks run`, and get your `log.html` in the `output` directory showing the method calls, if statements, assigns, returns, etc. that happened in your run.

One question I got a few times is how it works... Well, after working quite a bit on pydevd and debugpy, one of the things scratched right from the start was using the Python debugger infrastructure, due to a simple fact: if the debugger infrastructure was used, no one could actually debug their code while using the logging framework.

In the end, inspiration came from PyTest. p.s.: thanks to Bruno Oliveira for a discussion back in the day about how assertions are rewritten using import hooks to provide nicer messages on assertion failures in PyTest.

robocorp.log uses the same approach as PyTest (import hooks + AST rewriting), but instead of rewriting asserts it rewrites the whole method to add callbacks on what's happening.

So, something like:

def method():
    a = 1

roughly becomes something like:

def method():
    report_method_start(..., 'method')
    a = 1
    report_assign(..., 'a', a)
    report_method_end()

The real output is a bit more contrived, as it needs to deal with exceptions, yields and making sure the stack is correct, but I hope you get the idea.
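As a toy sketch of the AST-rewriting idea (the `_reported` list and the transformer are invented for illustration, not the real robocorp.log callbacks):

```python
import ast
import textwrap

SRC = textwrap.dedent("""
    def method():
        a = 1
""")

class ReportAssigns(ast.NodeTransformer):
    """Insert a reporting call after each simple assignment."""

    def visit_FunctionDef(self, node):
        self.generic_visit(node)
        new_body = []
        for stmt in node.body:
            new_body.append(stmt)
            if isinstance(stmt, ast.Assign) and isinstance(stmt.targets[0], ast.Name):
                name = stmt.targets[0].id
                # e.g. for "a = 1" append: _reported.append(('a', a))
                report = ast.parse(f"_reported.append(({name!r}, {name}))").body[0]
                new_body.append(report)
        node.body = new_body
        return node

tree = ast.fix_missing_locations(ReportAssigns().visit(ast.parse(SRC)))
namespace = {"_reported": []}
exec(compile(tree, "<demo>", "exec"), namespace)
namespace["method"]()
print(namespace["_reported"])  # [('a', 1)]
```

In the real library the same kind of transformation is applied by an import hook at module load time, rather than on a source string.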

Then, the callbacks are converted into a kind of journal of what happens, which is then fed to the log.html (the idea is to have it directly in a view in VSCode in the future, so that you can see the log being created in real time; right now the info is written to a bunch of "robolog" files and embedded into the final log.html).

Now, the approach does come with one caveat: import hooks need to be set up prior to importing a module; code imported before setting them up won't be traced. It's one of the reasons it's recommended to use robocorp.tasks to do the launching instead of bolting the logging on manually, as it makes sure things happen in the proper order.

The second caveat is related to the object __repr__. The framework is quite keen on getting the representation of objects at various times, so if a __repr__ is too slow the execution may be much slower, or, even worse, if it has side effects, bad things will happen (thankfully most objects do the right thing, as a __repr__ with side effects is a bug and the program would misbehave in debuggers too).

The third caveat is that right now it needs to be told what to trace (by default, full logging is available for all code considered user code, while libraries need to be manually specified to be logged when called directly from a function considered user code; this may change in the future, but it's the current state of affairs).

The final caveat is that right now it'll only trace what's happening in the main thread. Other threads won't appear in the log.

Well, I guess this post is already a bit bigger than I planned so I'll stop here. For those interested in testing or reporting enhancements/bugs see: https://github.com/robocorp/robo/.

Enjoy!

2022-08-11: PyDev debugger: Going from async to sync to async... oh, wait.

In Python asyncio land, it's always a bit of a hassle when you have existing code which runs in sync mode and needs to be retrofitted to run async, but it's usually doable: in many cases, slapping async on top of a bunch of definitions and adding await statements where needed does the trick, even though it's not always that easy.

Now, unfortunately, a debugger has no such option. You see, a debugger needs to work at the boundaries of callbacks which are called from Python (i.e.: it will usually do a busy wait on a line event in a callback registered through sys.settrace, which is always called as a sync call).

Still, users want to do some evaluation in the breakpoint context which would await... What now? Classic answers on how to go from async to sync say this is not possible.

This happens because to run something asynchronously an asyncio loop must be used, but alas, the current loop is paused at the breakpoint, and due to how asyncio is implemented in Python, the loop is not reentrant, so we can't just ask it to keep on processing at a certain point. Note that not all loops are equal; this is mostly an implementation detail of how CPython implemented it, but unless we want to monkey-patch many things to make it reentrant, this is a no-no. Also, even if it were possible, asyncio doesn't let us force a given coroutine to execute; rather, we schedule it and asyncio decides when it'll run afterwards.

My initial naive attempt was just creating a new event loop, but again CPython gets in the way, because two event loops can't coexist in the same thread. Then I thought about recreating the asyncio loop and got a bit further (up to being able to evaluate an asyncio.sleep coroutine), but after checking asyncio's AbstractEventLoop it became clear that the API is just too big to reimplement safely (it's not just about implementing the loop; it's also about implementing network I/O such as getnameinfo, create_connection, etc.).

In the end, the solution implemented in the debugger to support await constructs in evaluations is to create a new thread with a new event loop, and that event loop, in that new thread, executes the coroutine (with the context of the paused frame passed to the thread for the evaluation).
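The core of that idea can be sketched as follows (the helper name `run_in_dedicated_loop` is mine, not the actual pydevd API):

```python
import asyncio
import threading

def run_in_dedicated_loop(coro, timeout=10.0):
    """Run a coroutine to completion on a fresh event loop in a helper
    thread, leaving the (paused) thread that owns the real loop untouched."""
    outcome = {}

    def worker():
        loop = asyncio.new_event_loop()  # brand new loop: no reentrancy issue
        try:
            outcome["result"] = loop.run_until_complete(coro)
        except BaseException as exc:
            outcome["error"] = exc
        finally:
            loop.close()

    t = threading.Thread(target=worker, name="eval-loop", daemon=True)
    t.start()
    t.join(timeout)
    if t.is_alive():
        raise TimeoutError("evaluation did not finish in time")
    if "error" in outcome:
        raise outcome["error"]
    return outcome["result"]
```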

This is not perfect, as there are some cons: evaluating the code in a thread can mean that some evaluations won't work, because some frameworks, such as Qt, consider the UI thread special; checks for the current thread won't match the paused thread; and probably a bunch of other things. But I guess it's a reasonable tradeoff vs. not having it at all, and it should work in the majority of cases.

Keep an eye out for the next release, as it'll be possible to await coroutines in debugger evaluations and watches ;)

p.s.: For VSCode users this will also be available in debugpy.

2022-03-10: PyDev 9.3.0 (debugger improvements / last version with Python 2.7 - 3.5 support)

PyDev 9.3.0 is now available.

The main changes in this release are related to the debugger.

Also noteworthy: this will be the last release supporting older versions of Python (Python 2.7 up to Python 3.5). Newer releases will only support Python 3.6 onwards.

2021-04-18: PyDev 8.3.0 (Java 11, Flake8, Code-completion LRU, issue on Eclipse 4.19)

PyDev 8.3.0 is now available!

Let me start with some warnings here:

First, PyDev now requires Java 11. I believe Java 11 is pretty standard nowadays, and the latest Eclipse also requires it (if you absolutely need Java 8, please keep using PyDev 8.2.0 -- or earlier -- indefinitely; otherwise, please upgrade to Java 11 or higher).

Second, Eclipse 2021-03 (4.19) is broken and cannot be used with any version of PyDev due to https://bugs.eclipse.org/bugs/show_bug.cgi?id=571990. So, if you use PyDev, please stick with Eclipse 4.18 (or a newer version once available). The latest version of PyDev warns about this; older versions will not complain, but some features will not function properly, so please skip Eclipse 4.19 if you use PyDev.

Now, on to the goodies ;)

On the linters front, linter configurations can now be saved to the project or user settings, and flake8 has a much more flexible configuration UI, allowing the severity of any error to be changed.

A new option was added which allows all comments to be added at a single indentation level (and this is now the default).

The code-completion and quick fixes which rely on automatically adding an import will now cache the selection, so that if a given token is imported, that selection is saved and reused when asked again (so, for instance, if you just resolved Optional to typing.Optional, that'll be the first choice the next time around).

Environment variables are now properly supported in .pydevproject. The expected format is: ${env_var:VAR_NAME}.

Acknowledgements

Thanks to Luis Cabral, who is now helping in the project, for doing many of those improvements (and to the Patrons at https://www.patreon.com/fabioz, whose support enabled it to happen).

 Enjoy!

2021-02-27: PyDev 8.2.0 released (external linters, Flake8, path mappings, ...)

 PyDev 8.2.0 is now available for download.

This release has many improvements for dealing with external linters.

The main ones are the inclusion of support for the Flake8 linter, as well as using a single linter call to analyze a directory, so that should be much faster now (previously external linters were called once for each file).

Note: to request code analysis for all the contents below a folder, right-click it and choose PyDev > Code analysis:

Another change is that comments are now added to the line indentation... 

This means that code such as:

def method():
    if True:
        pass

Will become:

def method():
    # if True:
        # pass

p.s.: it's possible to revert to the old behavior by changing the preferences at PyDev > Editor > Code Style > Comments. 

Also note that, after some feedback, an option to format as the code below will be added in the next release (and will probably be made the default):

def method():
    # if True:
    #     pass

Interpreter configuration also got a revamp:

So, it's possible to set a given interpreter as the default one, and if you work with conda, select Choose from Conda to pick one of your conda environments and configure it in PyDev.

Path mappings for remote debugging can now (finally) be configured from within PyDev itself, so, changing environment variables is no longer needed for that:

 

Note that Add path mappings template entry may be clicked multiple times to add multiple entries.

That's it... More details may be found at: http://pydev.org.

Hope you enjoy the release 😊


2020-12-08: PyDev 8.1.0 released (Python 3.9, Code analysis, f-string quick-fixes)

 PyDev 8.1.0 is now available for download.

As a note, I didn't really create a post for 8.0.1, so I'm covering some of the features of that version in this blog post too! 😊

Some nice things added: 


Note that even if there's no formatting in the string, it's possible to write a string such as "Value is {value}" and then use the quick fix just to add the "f" prefix (which is quite handy if you weren't initially planning to make it an f-string).

Note that it's also possible to change those settings in the interactive console preferences if you want to change them after the initial selection (hint: after the interactive console is created, it's possible to use F2 to send a line from the editor to the interactive console for notebook-like evaluation).

Besides this, there are many other improvements, such as improved conda activation, rerunning pytest parametrized tests from the PyUnit view, MyPy integration improvements, support for from __future__ import annotations in code analysis, and debugger improvements, among many others. See: https://pydev.org for more details.

Enjoy!



2020-09-15: PyDev 8.0 released (17 years of PyDev, typing support, MyPy and Debugger)

Wow, PyDev is turning 8.0... the 8.0 version number probably doesn't do it much justice, as it has actually been in development for 17 years already! 😊

I'm actually pretty happy on how things are working around it right now... Some interesting points:

sidenote: it's hard to get the real usage stats because telemetry is not baked into PyDev; there are individual file download counts on Bintray (which generously hosts the PyDev update site) and SourceForge (which, summed, currently put it at around 32k downloads/month)... anyway, I'm always a bit skeptical of download stats, so I currently define success based on the number of supporters on Patreon/PayPal and LiClipse licenses (all of which are at an all-time high 😊) rather than the raw number of downloads.
 
sidenote 2: I've never really lived off the PyDev income (I worked many years doing scientific computing in Python and nowadays I'm into consulting); at this point it'd probably be at the ramen-profitable stage, but there's an interesting dichotomy in that if I lived only off it I wouldn't be able to dogfood it as much as I do now, as it's a Python IDE written in Java (so, I'm happy having it as a side hustle). Still, it's good that it got to a point where I can have additional help on it (my time is definitely very limited nowadays, compared to 17 years ago when I started working on it).

Ok, on to news on the PyDev 8.0 release...

As with the previous release, this release keeps on improving the support for type hinting and MyPy.

On the MyPy front, besides showing an error, it will also show the related notes for a message in the tooltip (which were previously available only in the output view), and MyPy processes are no longer launched in parallel when using the same cache folder (as this could end up making MyPy write wrong caches, which required the cache folder to be manually erased).

On the type inference front, there are multiple improvements to take advantage of type hints (such as support for Optional[] in code completion, handling types given as strings, and following type hints when presenting an option to create a new method in a class).

The debugger had a critical fix in the frame evaluation mode (the mode which adds programmatic breakpoints by manipulating bytecode) which could make it skip breakpoints or even change the behavior of a program in extreme cases.

Besides this, other niceties such as code completion inside f-strings are now available, and this release had many other fixes (see: https://www.pydev.org for more details).

So, thank you to all PyDev users, and here's to the next 17 years; hope you enjoy using it as much as I enjoy making it!

🥂

2020-08-02: PyDev 7.7.0 released (mypy integration improvements, namespace packages)

This release brings multiple improvements for dealing with type hints, as well as improvements to the Mypy integration in PyDev:

The MYPYPATH can now be set automatically to the source folders configured in PyDev, and the --follow-imports flag is set to silent by default (this flag is required because PyDev analyzes only one file at a time, and failing to set it would end up showing errors for other files).



Personally, I think that with Python 3.8 it's finally possible to use Python type hints properly (due to the typing.Protocol -- a.k.a. duck typing -- support), so more improvements are expected on that front in PyDev itself.

This release also brings many other improvements, such as support for pip-installed namespace packages, debugger fixes, support for parsing with the latest version of Cython and support for the latest PyTest.

There are actually many other minor improvements and bug-fixes in this release too. Check the release notes at http://www.pydev.org/ for more details!

Acknowledgements

Thanks to Luis Cabral, who is now helping in the project, for doing many of those improvements (and to the Patrons at https://www.patreon.com/fabioz, whose support enabled it to happen).

Thanks to Microsoft for sponsoring the debugger improvements, which are also available in Python in Visual Studio and the Python extension for Visual Studio Code.

Enjoy!

--
Fabio

2020-06-11: PyDev 7.6.0 (Python 3.8 parsing fixes and debugger improvements)

PyDev 7.6.0 is now available for download.

This release brings multiple fixes to parsing the Python 3.8 grammar (in particular, dealing with f-strings and iterable unpacking had some corner cases that weren't well supported).

Also, the debugger had a number of improvements.




Besides those, there were a number of other improvements... some noteworthy ones are support for the latest PyTest, a faster PyDev Package Explorer, type inference for type comments with self attributes (i.e.: #: :type self.var: MyClass), and properly recognizing trailing commas on automatic imports.

Acknowledgements

Thanks to Luis Cabral, who is now helping in the project, for doing many of those improvements (and to the Patrons at https://www.patreon.com/fabioz, whose support enabled it to happen).

Thanks to Microsoft for sponsoring the debugger improvements, which are also available in Python in Visual Studio and the Python extension for Visual Studio Code.

Enjoy!
--
Fabio

2020-03-18: How is frame evaluation used in pydevd?

First, some background on frame evaluation:

Since Python 3.6, CPython has had a mechanism which allows clients to override how it evaluates frames. This is done by changing PyThreadState.interp.eval_frame to a different C function (the default being _PyEval_EvalFrameDefault). See: pydevd_frame_evaluator.pyx#L370 in pydevd (note that Cython is used there).

Note that this affects the Python runtime globally, whereas the regular Python tracing function -- set through sys.settrace() -- affects only the current thread (so, some of the frame evaluation caches in pydevd are thread-local because of that).
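The per-thread nature of sys.settrace can be seen with a quick sketch (not pydevd code): a trace function installed in the main thread never sees calls made in a freshly started thread.

```python
import sys
import threading

seen = []

def tracer(frame, event, arg):
    # Record every function call observed by this trace function.
    if event == "call":
        seen.append(frame.f_code.co_name)
    return None  # no per-line tracing needed for this demo

def work():
    pass

sys.settrace(tracer)      # installs the tracer for THIS thread only
work()                    # 'work' is recorded
t = threading.Thread(target=work)
t.start()
t.join()                  # the helper thread ran 'work' untraced
sys.settrace(None)

print(seen.count("work"))  # 1: only the main-thread call was seen
```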

How is this used in the debugger?

Well, the debugger doesn't really want to change how Python code is executed, but there's another interesting side effect of frame evaluation: it's possible to change the bytecode of the frame right before it's evaluated, and CPython will interpret that bytecode instead of the frame's original bytecode.

So, this works the following way: the frame evaluation function receives a PyFrameObject*, and at that point the debugger checks the frame for existing breakpoints. If it has a breakpoint, the debugger creates a new code object which contains a programmatic breakpoint (pydevd_frame_evaluator.pyx#L234) and changes PyFrameObject.f_code to point to the new code object (pydevd_frame_evaluator.pyx#L358). When the programmatic breakpoint is reached (pydevd_frame_tracing.py#L34), the regular (trace-based) debugger kicks in for that frame. Until that breakpoint is reached, frames are executed at full speed.

But if it runs at full speed, why is my program still running slower when using pydevd with frame evaluation?

Well, frames are executed at full speed, but the debugger still adds some overhead at function calls (when it decides whether to add the programmatic breakpoint), and it also needs to set an almost no-op trace function (pydevd_frame_evaluator.pyx#L95) through sys.settrace, which makes function calls slower too (this is needed because otherwise the debugger would not be able to switch to regular tracing by just changing frame.f_trace, as frame.f_trace is only checked when a tracing function has been set for the thread through sys.settrace()). There are also some cases where it can't completely skip tracing for a frame even if it doesn't have a breakpoint (for instance, when it needs to break on caught exceptions or when it's stepping in the debugger).

It's interesting to note that even the regular (tracing) debugger on pydevd can run frames at full speed (it evaluates all frames and, if a frame doesn't have a breakpoint, the tracing for that frame will be skipped). The difference is that in frame eval mode a frame which does have a breakpoint can still run at full speed until it reaches the breakpoint, whereas in the regular mode each new line tracing event would need to be individually checked for a breakpoint.
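A minimal sketch of that skipping logic with a plain sys.settrace tracer (just to illustrate the idea, not pydevd's actual code -- `breakpoint_lines` and the trace functions below are made up for the example):

```python
import sys

breakpoint_lines = set()  # (filename, lineno) pairs where a breakpoint exists
line_events = []

def trace_dispatch(frame, event, arg):
    # Global trace function: called once per 'call' event. Returning None here
    # means the frame runs at full speed, with no per-line tracing at all.
    if event == 'call':
        filename = frame.f_code.co_filename
        if not any(f == filename for f, _ in breakpoint_lines):
            return None  # no breakpoint in this file: skip tracing this frame
        return trace_lines
    return None

def trace_lines(frame, event, arg):
    # Local trace function: called for every line executed in the traced frame.
    if event == 'line':
        line_events.append(frame.f_lineno)
    return trace_lines

def traced_function():
    a = 1
    b = 2
    return a + b

# First run: no breakpoints, so no line events are recorded.
sys.settrace(trace_dispatch)
try:
    traced_function()
finally:
    sys.settrace(None)
no_bp_events = list(line_events)

# Second run: with a breakpoint registered, every line is now traced.
breakpoint_lines.add((traced_function.__code__.co_filename, 999))
sys.settrace(trace_dispatch)
try:
    traced_function()
finally:
    sys.settrace(None)
```

The first run records nothing (the frame is skipped entirely at the 'call' event), while the second pays the per-line cost for every line of the function -- which is exactly the overhead frame evaluation avoids until the breakpoint is actually reached.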

If it just changes the bytecode, why use frame eval at all, can't you just change the bytecode of objects at a custom import hook? (which could have the benefit of avoiding the performance penalty of checking the frame on each new frame call)

There are 2 main reasons for that: the 1st is that breakpoints can change and when they change the frame evaluation would need to be completely shut down and only the tracing debugger would be valid from that point onwards (whereas right now, if breakpoints change, the tracing debugger kicks in for all the frames that were currently running but the frame evaluation debugger can still be used for new frames). The 2nd is that it can be hard to consistently do that if not just before frame evaluation (because user code can also change a method code and there are a number of corner cases to change the bytecode for live objects -- think of a function inside a function or decorated functions).

Note that this means that the debugger could probably get away with something simpler than frame evaluation, which could potentially be applicable to other Python implementations (say, a different callback just before the frame is evaluated which allows changing the frame code... unfortunately it can't currently be done through the "call" event received by the trace function set by sys.settrace because at that point the frame is already being evaluated with the current code and, even if it's changed, Python won't pick up that change).

That's it, hope you enjoyed pydevd using frame evaluation for debugging purposes 101 ;)





Fabio Zadrozny

PyDev 7.5.0 Released (Python 3.8 and Cython) (2020-01-10)
PyDev 7.5.0 is now available for download.

The major changes in this release are Python 3.8 support and improved Cython parsing.

Python 3.8 should've been in 7.4.0 (but because of an oversight on my part during the build it wasn't, so, this release fixes that).

As for the Cython AST, Cython is now parsed using Cython itself (so, it needs to be installed and available in the default interpreter for PyDev to be able to parse it). The major issue right now is that the parser is not fault tolerant (this means that for code-completion and code-analysis to kick in the code needs to be syntax-correct, which is a problem when completing instance variables right after a dot).

Fixing that in Cython seems to be trivial (https://github.com/cython/cython/issues/3303), but I'm still waiting for a signal that it's ok to add that support to make Cython parsing fault-tolerant.

Enjoy!
PyDev 7.2.0 released (2019-03-27)

PyDev 7.2.0 is now available for download.

This version brings some improvements to the debugger and a fix for when PyDev could not properly find pipenv which could impact some users.

See: http://pydev.org for more details.

PyDev 7.0 (mypy, black, pipenv, faster debugger) (2018-11-09)
PyDev 7.0 (actually, PyDev 7.0.3 after some critical bugfixes on 7.0.0) is now available.

Some of the improvements available in this version include:

Mypy may be used as an additional backend for code analysis (see Preferences > PyDev > Editor > Code Analysis > Mypy). It is similar to the PyLint integration, and will run Mypy whenever a file is saved in the editor.

Black can be used as the code-formatting engine (so, it's now possible to choose between the PyDev formatter, autopep8 or Black).

pipenv may be used for managing virtual environments (so, when creating a new project, clicking to configure an interpreter not listed will present an option to create a new interpreter with pipenv).

Improvements managing interpreters: it's now possible to manage interpreters directly from the editor (so, for instance, doing ctrl+2 pip install django will use pip to install django in the interpreter related to the opened editor -- and it's possible to change pip for conda or pipenv too).

Debugger improvements

The debugger is much faster for Python 3.6 onwards (when cython compiled extensions are available).

The performance improvement on the debugger is due to using the frame eval mode for breakpoints again.

To give some history, that mode was previously disabled on pydevd because it had some issues which made function calls much slower (even though line stepping was zero overhead), but in the end, that overhead could make the performance worse than just using the plain tracing mode. Also, it had a big memory leak and in the cases where the tracing mode had to be reenabled both modes would be active at the same time and performance would suffer quite a bit.

-- all of those should be fixed and it should now perform better or at least on par with the tracing mode with cython in all scenarios... if you like numbers, see: https://github.com/fabioz/PyDev.Debugger/blob/master/tests_python/performance_check.py#L193 for some benchmarks.

p.s.: the improvements in the Debugger were sponsored by Microsoft, as the PyDev Debugger is used as the core of ptvsd, the Python debugger package used by Python in Visual Studio and the Python Extension for Visual Studio Code.

 
PyDev 6.5.0 (#region code folding) (2018-09-03)
PyDev 6.5.0 is now available for download.

There are some nice features and fixes available in this release:
See: http://www.pydev.org for more details.
Profiling pytest startup (2018-08-13)
I'm a fan of pytest (http://www.pytest.org), yet, it seems that the startup time for running tests locally in the app I'm working on is slowly ramping up, so, I decided to do a profile to see if there was anything I could do to improve that.

The first thing I did was create a simple test and launch it from the PyDev (http://www.pydev.org/) profile view -- it enables any launch done in PyDev to show its performance profile on PyVmMonitor (https://www.pyvmmonitor.com).

Note that this is an integration test that is starting up a big application, so, the total time just to startup all the fixtures which make the application live and shut down the fixtures is 15 seconds (quite a lot IMHO).
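As a note, if you don't have PyVmMonitor around, the stdlib cProfile can give a rough first picture of where startup time goes (just a sketch, with a dummy `slow_setup` standing in for the expensive fixture setup):

```python
import cProfile
import io
import pstats

def slow_setup():
    # Stand-in for expensive fixture/startup work.
    total = 0
    for i in range(100000):
        total += i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_setup()
profiler.disable()

# Sort by cumulative time so the expensive call trees float to the top.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats('cumulative')
stats.print_stats(10)  # top 10 entries
report = stream.getvalue()
```

The cumulative sort is the one that matters for startup profiling: it attributes time to the callers, so a fixture that spends 2 seconds inside filesystem calls still shows up near the top.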

The first thing I noticed looking at the profile is that 14% of that time seems to be spent creating a session temp dir:


After investigating a bit more it seems that there is a problem in the way the fixture used make_numbered_dir (it was passing a unicode when it should be a str on Python 2) and make_numbered_dir had an issue where big paths were not removed.

So, pytest always visited my old files every time I launched any test and that accounted for 1-2 seconds (I reported this particular error in: https://github.com/pytest-dev/pytest/issues/3810).

Ok, down from 15 to 13 seconds after manually removing old files with big paths and using the proper API with str on Py2.

Now, doing a new profile with that change has shown another pytest-related slowdown doing rewrites of test cases. 


This is because of a feature of pytest where it'll rewrite test files to provide a prettier stack trace when there's some assertion failure.

So, I passed --assert=plain to pytest and got 3 more seconds (from 13 down to 10) -- it seems all imports are a bit faster with the import rewrite disabled, so I got an overall improvement there, not only in that specific part of the code. This is probably not nice on CI, where I want to have more info, but it seems like a nice plus locally, where I run many tests manually: the saved time for those runs will definitely be worth it even with less info when some assertion fails.

Now, with that disabled the next culprit seems to be getting its plugins to load:



But alas, it uses setuptools and I know from previous experience that it's very hard to improve that (it is very greedy in the way it handles loading metadata, so, stay away unless you're ok with wasting a lot of time on your imports) and the remainder of the time seems to be spread out importing many modules -- the app already tries to load things as lazily as possible... I think I'll be able to improve on that to delay some imports, but Python libraries are really hard to fix as everyone imports everything at the top of the module.

Well, I guess going from 15 s to 10 s with just a few changes is already an improvement in my case for an integration test which starts up the whole app (although it could certainly be better...) and I think I'll still be able to trim some of that time by doing some more imports lazily -- although that's no longer really pytest-related, so, that's it for this post ;)

PyDev 6.4.3 (code formatter standalone, debugger improvements and f-strings handling) (2018-07-06)

The latest version of PyDev is now out...

Major changes in this release include:

1. Being able to use the PyDev code formatter as a standalone tool.

To use it, install it with pip install pydevf (the command line is provided as a python library which will call the actual formatter from PyDev -- see the README at https://github.com/fabioz/PyDev.Formatter for more details on how to use it).

The target of the PyDev formatter is to keep as close as possible to the original structure of the code while fixing many common issues (so, it won't try to indent based on line width but will fix things such as a missing space after a comma, a missing space at the start of a comment, blank lines among methods and classes, etc).

2. Improvements to the debugger, such as:

As a note, the debugger improvements have been sponsored by Microsoft, which is in the process of using the PyDev Debugger as the core of ptvsd, the Python debugger package used by Python in Visual Studio and the Python Extension for Visual Studio Code (note that it's still marked as experimental there as it's in the process of being integrated into ptvsd).

 It's really nice to see pydevd being used in more IDEs in the Python world! 😉

Besides those, there are some bugfixes in handling f-strings and sending the contents of the current line to the console (through F2).

Also, a new major version of LiClipse (5.0) is now also available (see: http://www.liclipse.com/download.html for how to get it). It includes the latest PyDev and a new major Eclipse release (4.8 - Photon).

Howto launch and debug in VSCode using the debug adapter protocol (part 2) (2018-05-12)
Ok, after the basic infrastructure, the next thing to do is actually launch some program without worrying about the debugger, so, we'll just run a program without being in debug mode to completion, show its output and terminate it when requested.

To launch a program, our debug adapter must treat the 'LaunchRequest' message and actually run the program (bear in mind that we'll just launch it without doing any debugging at this point).

The first point then is how to actually launch it. We provided options for the debugger to be launched with different console arguments specifying where to launch it (either just showing output to the debug console, using the integrated terminal or using an external terminal).

So, let's start with just showing output in the debug console.

Launching it should be simple: just generate a command line while treating the 'LaunchRequest' message, but then, based on the console specified some things may be different...

Let's start handling just showing the output on the debug console.

To do that we have to launch the command properly redirecting the output to pipes (for python it's something like subprocess.Popen([sys.executable, '-u', file_to_run], stdout=subprocess.PIPE, stderr=subprocess.PIPE)) and then create threads which will read that output to provide it back to vscode (so, when output is obtained, an OutputEvent must be given).

Also, create another thread so that when the process finishes, a TerminatedEvent is given (in python, just do a popen.wait() in a thread and when complete send the TerminatedEvent -- you may want to synchronize to make sure other threads related to output have finished before doing that).
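Putting those two paragraphs together, a minimal sketch in Python could look like this (the `output_events` and `terminated` lists below are just stand-ins for actually sending OutputEvent/TerminatedEvent messages back to VSCode):

```python
import subprocess
import sys
import threading

output_events = []  # would be sent to VSCode as OutputEvent messages
terminated = []     # would become the TerminatedEvent

def _reader(stream, category):
    # Read the pipe line by line until the process closes it.
    for line in iter(stream.readline, b''):
        output_events.append((category, line.decode('utf-8', 'replace')))
    stream.close()

popen = subprocess.Popen(
    [sys.executable, '-u', '-c', 'print("hello from the program")'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)

readers = [threading.Thread(target=_reader, args=(popen.stdout, 'stdout')),
           threading.Thread(target=_reader, args=(popen.stderr, 'stderr'))]
for t in readers:
    t.start()

def _waiter():
    popen.wait()
    for t in readers:
        t.join()  # make sure all output was forwarded before terminating
    terminated.append('TerminatedEvent')

waiter = threading.Thread(target=_waiter)
waiter.start()
waiter.join()
```

Joining the reader threads before emitting the TerminatedEvent is the synchronization mentioned above: without it, the client could close the session while output is still in flight.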

At this point, we can run something and should be able to see anything printed both to stdout and stderr and when the process finishes, VSCode itself acknowledges that and closes the related controls.

Great! On to launching in the integrated terminal!

So, to launch in the terminal we have to first actually check if the client does support running in the terminal... in the InitializeRequest, if it is supported, we should've received in the arguments "supportsRunInTerminalRequest": True (if it doesn't, in my case I just fall back to the debug console).

This also becomes a little bit trickier because at this point we're the ones doing the request (RunInTerminalRequest) and the client should send a response (RunInTerminalResponse). So, on to it: when the client launches, create a RunInTerminalRequest with the proper kind ('internal' or 'external') and wait for the response.

At this point, the processId may not actually be available after launching in that mode (the RunInTerminalResponse processId is optional), which means that if we didn't really create a debugger (just a simple run), we're blind... we could do another program to launch it and return the pid to be able to notify that it was stopped and to kill it when needed, but this seems a bit overkill for me and I couldn't find any info on the proper behavior here, so, I decided that when the user chooses that mode with 'noDebug' I'll simply notify that the debug session is finished for the adapter with a TerminatedEvent (and the user can see the output and Ctrl+C it in the actual terminal).

As a note the 'noDebug' option is added behind the scenes by VSCode depending on whether the user has chosen to do a debug or run for the selected launch (so, it shouldn't be a part of the declared configuration in the extension).

Now, thinking a bit more about it, there's a caveat: when launching with the redirection to the debug console, we should handle sending to stdin too (we don't want to create a process the user can't communicate with later on).

To do that in 'noDebug' should be simple... when we receive an 'EvaluateRequest', we'll send it to stdin (when actually in debug mode we probably have to check the current debugger state to determine if we want to do an evaluation or send to stdin -- i.e.: if we are stopped in a breakpoint we may want to evaluate and not send to stdin).

As a note, after playing with it more I renamed the "console" option to "terminal" with options "none", "internal", "external" as I think that's a better representation of what's expected.

So, that's it for part 2: we're launching a program and redirecting the output as requested by the user (albeit without actually debugging it for now).

The related code may be seen at: https://github.com/fabioz/python_debug_adapter_tutorial/tree/master/part2
Howto launch and debug in VSCode using the debug adapter protocol (part 1) (2018-05-09)
This is a walkthrough with the steps I'm taking to add support to launch and debug a Python script in PyDev for VSCode (note that I'm writing as I'm learning).

The debugger protocol is the protocol used in VSCode to talk to debuggers and handle launching in general (the naming may be a bit weird as the same protocol is used for regular launches and debugging, but apparently the team first did the debugging and then launching came as an afterthought just passing a separate flag during the launching of the program to specify that no debugging should be done -- and not the other way around as I think would be more common).

There is an overview of the protocol at https://code.visualstudio.com/docs/extensionAPI/api-debugging and https://code.visualstudio.com/docs/extensionAPI/extension-points provides more information on what an extension must use to provide a debugger.

There's also a json schema which specifies the format of the messages sent back and forth in the debugger at https://raw.githubusercontent.com/Microsoft/vscode-debugadapter-node/master/debugProtocol.json.

But, after reading all that, it seems that many things are still cloudy on my head on how to actually go on about it and what should be done concretely to implement a debugger in VSCode.

So, my approach is getting the debugProtocol.json, converting it to a structure with Python classes (so that each message that can be sent has a Python representation) and playing a bit doing a debugger stub, just to exercise a dummy debugger talking to VSCode (but without actually doing anything).

It's interesting to note that the first thing to do is actually making the debugger available in the extension. For that, I've used the json below in package.json (as a note, my package.json is actually generated from Python code, so, the structure below is actually a Python dict which is later converted to json, not the actual json -- if you're doing a VSCode extension, I highly recommend generating your package.json and parts of the code that are related and not doing it all by hand... this way it's possible to see it in small pieces and auto generate command ids and the related code, etc... initially I haven't done so in PyDev, but as the declarative files grow, it becomes harder to follow and make changes while keeping the code and declaration in sync):

{
    'type': 'PyDev',
    'label': 'PyDev (Python)',
    'languages': ['python'],
    'adapterExecutableCommand': 'pydev.start.debugger',
    # Note: adapterExecutableCommand will be replaced by a different API (right now still in proposal mode).
    # See: https://code.visualstudio.com/updates/v1_20#_debug-api
    # See: https://github.com/Microsoft/vscode/blob/7636a7d6f7d2749833f783e94fd3d48d6a1791cb/src/vs/vscode.proposed.d.ts#L388-L395
    'enableBreakpointsFor': {
        'languageIds': ['python', 'html'],
    },
    'configurationAttributes': {
        'launch': {
            'required': ['mainModule'],
            'properties': {
                'mainModule': {
                    'type': 'string',
                    'description': 'The .py file that should be debugged.',
                },
                'args': {
                    'type': 'string',
                    'description': 'The command line arguments passed to the program.'
                },
                "cwd": {
                    "type": "string",
                    "description": "The working directory of the program.",
                    "default": "${workspaceFolder}"
                },
                "console": {
                    "type": "string",
                    "enum": ["integratedTerminal", "externalTerminal"],
                    "enumDescriptions": [
                        "VS Code integrated terminal.",
                        "External terminal that can be configured in user settings."
                    ],
                    "description": "The specified console to launch the program.",
                    "default": "integratedTerminal"
                },
            }
        }
    },
    "configurationSnippets": [
        {
            "label": "PyDev: Launch Python Program",
            "description": "Add a new configuration for launching a python program with the PyDev debugger.",
            "body": {
                "type": "PyDev",
                "name": "PyDev Debug (Launch)",
                "request": "launch",
                "cwd": "^\"\\${workspaceFolder}\"",
                "console": "integratedTerminal",
                "mainModule": "",
                "args": ""
            }
        },
    ]
}


So, although there are many things there, initially we just need to make adapterExecutableCommand return the command to be executed (you could also create a standalone executable or something to run along with a supported vm -- such as mono, but there's nothing for python there, so, the adapterExecutableCommand is probably the best approach for a python debugger).

In my case it's something as:

commands.registerCommand('pydev.start.debugger', () => {
    return {
        command: "C:/bin/python27/python.exe", // paths initially hardcoded for simplicity
        args: ["X:/vscode-pydev/vscode-pydev/src/debug_adapter/debugger_protocol.py"]
    }
});

The configurationSnippets section provides the snippets which allow VSCode to autogenerate the configuration for the user and the configurationAttributes are actually custom for each implementation (so, those will probably need more tweaking going forward).

Another interesting point is that when VSCode launches the debug adapter it'll use stdin and stdout to communicate with the adapter (this makes some things a bit quirky to develop the debugger because you have to (initially) resort to printing debug information to a file to be able to check what's happening, although on the bright side, you won't have to worry about having a firewall at that point).

Also, don't forget to flush after writing messages to stdout.

Now, on to the protocol itself... I created something which would read from stdin and then redirect that to a file to see what's coming (after digging up things a bit more I found an issue in the VSCode tracker referencing: https://github.com/buggerjs/bugger-v8-client/blob/master/PROTOCOL.md which details that a bit more -- although not all that's there is actually applicable to the VSCode debugger). 

The first message that arrives from stdin is:

Content-Length: 312\r\n
\r\n
{
    "arguments": {
        "adapterID": "PyDev",
        "clientID": "vscode",
        "clientName": "Visual Studio Code",
        "columnsStartAt1": true,
        "linesStartAt1": true,
        "locale": "en-us",
        "pathFormat": "path",
        "supportsRunInTerminalRequest": true,
        "supportsVariablePaging": true,
        "supportsVariableType": true
    },
    "command": "initialize",
    "seq": 1,
    "type": "request"
}

-- this is the InitializeRequest in the json schema.

So, it seems like a regular http-style protocol, sending json contents as the actual content... in response to that, the debug adapter should do its initialization and return the capabilities it has -- something like:

{
    "seq": 1,
    "request_seq": 1,
    "command": "initialize",
    "body": {
        "supportsConfigurationDoneRequest": true,
        "supportsConditionalBreakpoints": true
    },
    "type": "response",
    "success": true
}

-- this is the InitializeResponse in the json schema.

and then send and event saying that it has initialized properly:

  {"type": "event", "event": "initialized", "seq": 2} 

-- this is the InitializedEvent in the json schema.

Note that those all use http-style framing, so, the Content-Length: $size\r\n\r\n header needs to be passed on each message (note that each message sent or received has a seq, which is a number that should be incremented whenever a new message is sent -- the seq is incremented independently on the server and on the client and responses should reference the seq from the request in request_seq).
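A small sketch of that framing in Python (not the actual adapter code -- the function names here are made up for the example):

```python
import json

_seq = {'value': 0}

def next_seq():
    # seq is incremented independently on each side of the connection.
    _seq['value'] += 1
    return _seq['value']

def frame_message(msg):
    # Serialize the json body and prepend the Content-Length header.
    msg = dict(msg, seq=next_seq())
    body = json.dumps(msg).encode('utf-8')
    header = 'Content-Length: {}\r\n\r\n'.format(len(body)).encode('ascii')
    return header + body

def unframe_message(data):
    # Split the header from the body and decode the json payload.
    header, _, body = data.partition(b'\r\n\r\n')
    declared = int(header.split(b':')[1].strip())
    assert declared == len(body)
    return json.loads(body.decode('utf-8'))

framed = frame_message({
    'type': 'response',
    'request_seq': 1,
    'command': 'initialize',
    'success': True,
    'body': {'supportsConfigurationDoneRequest': True},
})
decoded = unframe_message(framed)
```

A real adapter reads the header from stdin first, then reads exactly Content-Length bytes for the body (and must flush stdout after writing each framed message, as noted earlier).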

Afterwards, the client (VSCode) sends the actual launch request (which should be based on the configurationAttributes previously configured). In this case:

{
    "arguments": {
        "__sessionId": "474aa497-0a90-4b30-8cc6-edf3bebbe703",
        "args": "",
        "console": "integratedTerminal",
        "cwd": "X:\\vscode_example",
        "name": "PyDev Debug (Launch)",
        "program": "X:/vscode_example/robots.py",
        "request": "launch",
        "type": "PyDev"
    },
    "command": "launch",
    "seq": 2,
    "type": "request"
}

-- this is the launch request in the json schema (it comes with additional attributes the user specified in the launch... each extension needs to tweak the actual parameters to its use case).

At this point, it becomes clear that this is really just an adapter: we're expected to actually launch the process and provide the communication layer to the actual debugger (so, the debugger doesn't really have to be changed -- although in some cases that may be beneficial if possible... for instance, the debugger could already give output on the variable frames as json so that the message doesn't need to be decoded and recoded in a new format). 

Also, the stdin and stdout may be in use (because VSCode uses it to communicate to the debug adapter), so, it may be hard to reuse this process to be the actual debugger process (for instance, launch could then make main proceed to launch the program in this process if the debugger could directly handle the debug protocol, but then if clients managed to write to the 'real' stdin/stdout handles, the debugger would stop working). 

The launch request just requires a notification that the program was launched, so, the response would be a launch response with an empty body (or if there was some error -- say, the file to be launched no longer exists -- a "message" could be set and "success" could be False). 

{
    "request_seq": 2,
    "command": "launch",
    "body": {},
    "type": "response",
    "success": true
}

-- this is the LaunchResponse in the json schema.

Ok, now, at this point I already have a structure which parses the json and creates python instances for each protocol message (and vice-versa), so, instead of specifying each message in its full format, I'll just reference it from the identifier on the schema instead of the actual json. 

After the launch request, I get a ConfigurationDoneRequest and return the proper ConfigurationDoneResponse and for the ThreadsRequest a ThreadsResponse.
At this point, the debugger will sit idle, waiting for actions from the user or events from our debug adapter (if more than one thread was returned in the ThreadsResponse, the threads will appear in the CallStack).

Now, the only thing different at this point is that the debug controls will appear, so, a pause or stop can be activated from the UI.

Pressing stop will send us a DisconnectRequest (for which a DisconnectResponse should be sent as an acknowledgement) and the pause will send a PauseRequest (which requires us to send back a PauseResponse -- and after a thread is actually paused, a StoppedEvent should be sent). 

Ok, this is the end of part 1 (we have something which can be started and later stopped -- without actually doing anything, so, pretty much a mock debugger)... This actually took me 2 full days to implement (most of the work trying to wrap my head around how things worked and generating python code from the json schema -- I tried some libraries and none of them worked as I needed, so, I rolled my own here). 

My main gripe was the lack of a better documentation on how to approach doing a debug protocol from scratch and how it should work. For instance, it took me quite a while to find a reference to launching from the adapterExecutableCommand where I could construct a command line -- initial references I found pointed only to using an executable or a supported runtime such as mono -- some things I still don't know how to handle, such as how to actually provide output based on the console type the user expects (i.e.: integratedTerminal or externalTerminal) -- anyways, hope to get to that in the upcoming parts... 

The final code I have at this point (which also contains the code generator I did) may be seen at:

https://github.com/fabioz/python_debug_adapter_tutorial/tree/master/part1

Part 2 should get us to the point of actually launching a process...
PyDev 6.3.2: support for .pyi files (2018-03-21)

PyDev 6.3.2 is now available for download.

The main change in this release is that PyDev will now consider .pyi (stub) files when doing type inference, although there's still a shortcoming: the .pyi file must be in the same directory where the typed .py file is and it's still not possible to use it to get type inference for modules which are compiled (for instance PyQt).
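Just to illustrate the layout that works, a stub simply sits next to the module it types, with the same base name (the `calc` module below is made up for the example):

```python
import os
import tempfile

tmp = tempfile.mkdtemp()

# The implementation module:
with open(os.path.join(tmp, 'calc.py'), 'w') as f:
    f.write('def double(x):\n    return x * 2\n')

# The stub with the type information -- same directory, same base name:
with open(os.path.join(tmp, 'calc.pyi'), 'w') as f:
    f.write('def double(x: int) -> int: ...\n')

layout = sorted(os.listdir(tmp))
```

With this layout, type inference for `calc.double` can come from the .pyi file while the .py file provides the runtime behavior.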

I hope to address that in the next release (initially I wanted to delay this release to add full support for .pyi files, but there was a critical bug opening the preferences page for code completion, so it really couldn't be delayed any longer; nevertheless, the current support is already useful for users using .pyi files alongside .py files).

Also, code completion had improvements for discovering whether some call is for a bound or unbound method and performance improvements (through caching of some intermediary results during code completion).

Enjoy!
PyDev 6.3.1 (implicit namespace packages and Visual Studio Code support) (2018-03-01)
The major change in this release is that PyDev now recognizes that folders no longer require __init__.py files to be considered a package (PEP 420).

Although this is only available for Python 3.3 onwards, PyDev will now always display valid folders under the PYTHONPATH as if they were packages.

There were also some improvements, such as recognizing that dlls may have a postfix (so that dlls for multiple versions of Python may be available in the same folder) and a number of bugfixes (see http://www.pydev.org for more details).

Besides those, a good amount of work in this release was refactoring the codebase so that PyDev could also be available as a language server for Python, to enable it to be used on Visual Studio Code (http://www.pydev.org/vscode), so, Visual Studio Code users can now also use many of the nice features on PyDev ;)

Enjoy!
Python with PyDev on Visual Studio Code (2018-02-19)
PyDev can now be used for Python development on Visual Studio Code!

The first release already provides features such as code analysis, code completion, go to definition, symbols for the workspace and editor, code formatting, find references, quick fixes and more (see http://www.pydev.org/vscode/ for details).

All features have a strong focus on speed and have been shaped by the usage on PyDev over the last 14 years, so, I believe it's already pretty nice to use... there are still some big things to integrate (such as the PyDev debugger), but those should come shortly.

The requirements are having Java 8 (or higher) installed on the system (if it doesn't find it automatically the java home location may need to be specified in the settings -- http://www.pydev.org/vscode/ has more details) and Python 2.6 or newer.

By default it should pick the python executable available on the PATH, but it's possible to specify a different python executable through the settings on VSCode (see http://www.pydev.org/vscode/settings.html for details).

Below, I want to share some of the things that are unique in PyDev and are now available for VSCode users:
Now, the extension itself is not currently open source like PyDev... my target is making the best Python development environment around and all earnings will go towards that (as a note, all improvements done to PyDev itself will still be open source, so, most earnings from PyDev on VSCode will go toward open source development).

Enjoy!
Fabio Zadrozny

2017-12-04: Creating extension to profile Python with PyVmMonitor from Visual Studio Code

Ok, so, the target here is creating a simple extension for Visual Studio Code which will help in profiling the current module using PyVmMonitor (http://www.pyvmmonitor.com/).

The extension will provide a command which should present a few options on how the user wants to do the profiling (with yappi, with cProfile, or starting without profiling but connected to the live sampling view).
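As a rough sketch of what the deterministic modes boil down to (yappi is used in much the same way but also handles multi-threaded code), here's a minimal, self-contained cProfile example -- the PyVmMonitor wiring itself is not shown, and the profiled function is just a placeholder for the user's module:

```python
import cProfile
import io
import pstats

def work():
    # Placeholder workload standing in for the user's module.
    return sum(i * i for i in range(10000))

# Deterministic profiling: record every function call while enabled.
profiler = cProfile.Profile()
profiler.enable()
result = work()
profiler.disable()

# Dump the top entries sorted by cumulative time, as a profiling UI would.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue().strip().splitlines()[0])  # e.g. the "N function calls" summary
```

The live sampling mode is different in nature: instead of tracing every call, the process is periodically sampled, which has a much lower overhead.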

I went with https://code.visualstudio.com/docs/extensions/yocode to bootstrap the extension, which generates a template with a command to run, and then renamed a bunch of things (such as the extension name, description, command name, etc.).

Next was finding a way to ask the user for the options (i.e.: how the profile should be started). Searching for it revealed https://tstringer.github.io/nodejs/javascript/vscode/2015/12/14/input-and-output-vscode-ext.html, so, I went with creating the needed constants and using vscode.window.showQuickPick (experimenting a bit, undefined is returned if the user cancels the action, so, that needs to be taken into account too).

Now, after the user chooses how to start PyVmMonitor, the idea would be making any launch actually start in the chosen profile mode (which is how it works in PyDev).

After investigating a bit, I couldn't find out how to intercept an existing launch to modify the command line to add the needed parameters for profiling with PyVmMonitor, so, this integration will be a bit more limited than the one in PyDev as it will simply create a new terminal and call PyVmMonitor asking it to profile the currently opened module...

In the other integrations, it was done as a setting (a toggle) where the user selected that any python launch from a given point onward should be profiled, and launches were then intercepted to change the given command line, so, for instance, a unittest launch could be intercepted too -- but in this case, it seems that there's currently no way to do that, or some ineptitude on my part in finding an actual API to do it ;)
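For reference, the interception done in the other integrations boils down to rewriting the command line just before the launch; a hypothetical sketch of that idea in Python (the function name is made up, and cProfile stands in here for the actual profiler bootstrap that the real integrations insert):

```python
# Hypothetical sketch: rewrite a python launch command line so that the
# target runs under a profiler (the real integrations insert their own
# bootstrap arguments instead of cProfile).
def add_profiling(cmdline):
    cmdline = list(cmdline)
    for i, part in enumerate(cmdline):
        # Insert the profiler arguments right after the interpreter.
        if 'python' in part.lower():
            return cmdline[:i + 1] + ['-m', 'cProfile'] + cmdline[i + 1:]
    return cmdline  # no interpreter found: leave the launch untouched

print(add_profiling(['python', 'myscript.py']))
# → ['python', '-m', 'cProfile', 'myscript.py']
```

Because the rewrite happens at launch time, it works for any kind of launch (a plain run, a unittest run, etc.) without the extension having to know what is being launched.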

Now, searching the VSCode Python plugin, I found a function `execInTerminal`, so, I based the launching on it (but without using its settings, as I don't want to add a dependency on it for now, so, I just call `python` -- if that's wrong, since it opens a shell, the user is free to cancel that and correct the command line to use the appropriate python interpreter, or change it as needed later on).

Ok, wrapping up: I put the initial version of the code at https://github.com/fabioz/vscode-pyvmmonitor. Following https://code.visualstudio.com/docs/extensions/publish-extension did work out, so, there's a "Profile Python with PyVmMonitor" extension now ;).

Some notes I took during the process related to things I stumbled on or found awkward:
Fabio Zadrozny

2017-11-29: PyDev 6.2.0: Interactive Console word wrapping, pytest hyperlinking
PyDev 6.2.0 is mostly a bugfix release, although it does bring some features to the table, such as adding the possibility of activating word wrapping in the console and support for code completion using the Python 3.6 variable typing.
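The variable typing mentioned is the Python 3.6 variable annotation syntax from PEP 526; a small example of the kind of annotation a completion engine can pick up (the names here are made up for illustration):

```python
from typing import Dict, List

# PEP 526 variable annotations (Python 3.6+): the declared types tell an
# IDE the type of a variable without it having to infer anything.
names: List[str] = ['foo', 'bar']
ages: Dict[str, int] = {'foo': 30}

class Config:
    retries: int = 3  # annotated class attribute
    host: str         # an annotation without a value is also allowed

print(names[0].upper())  # → FOO
```

With `names` annotated as `List[str]`, completing on `names[0].` can directly offer the `str` methods.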

Another interesting change is that pytest filenames are properly hyperlinked in the console (until now PyDev resorted to mocking some functions of pytest so that when it printed exceptions it used the default Python traceback format -- now that's no longer done).

See: http://www.pydev.org for complete details.

p.s.: Thank you to all PyDev supporters -- https://www.brainwy.com/supporters/PyDev -- who enable PyDev to keep on being improved!

p.s.: LiClipse 4.4.0 already bundles PyDev 6.2.0, see: http://www.liclipse.com/download.html for download links.
Fabio Zadrozny