Commit message
...
Implementation is similar to Kakoune's: we store the entries in a register.
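As a rough illustration of that register approach (a minimal sketch; the map type and method names here are hypothetical, not Helix's actual `Registers` API):

```rust
use std::collections::HashMap;

/// Hypothetical register store: each register name maps to a list of entries,
/// similar to how Kakoune keeps multiple values per register.
#[derive(Default)]
struct Registers {
    inner: HashMap<char, Vec<String>>,
}

impl Registers {
    /// Replace the contents of register `name` with `values`.
    fn write(&mut self, name: char, values: Vec<String>) {
        self.inner.insert(name, values);
    }

    /// Read the entries stored in register `name`, if any.
    fn read(&self, name: char) -> Option<&[String]> {
        self.inner.get(&name).map(|v| v.as_slice())
    }
}

fn main() {
    let mut registers = Registers::default();
    // Store the yanked selections into the unnamed register.
    registers.write('"', vec!["first".into(), "second".into()]);
    assert_eq!(registers.read('"').map(|v| v.len()), Some(2));
}
```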
* Implement `margin` calculation for uncommenting
* Move `margin` calculation to `find_line_comment`
* Fix comment bug with multiple selections on a line
* Fix `find_line_comment` test for new return type
* Generate a single vec of lines for comment toggle
`toggle_line_comments` collects the lines covered by all selections into
a `Vec`, skipping duplicates. `find_line_comment` now returns the lines
to operate on, instead of returning the lines to skip.
* Fix test for `find_line_comment`
* Reserve length of `to_change` instead of `lines`
The length of `lines` includes blank lines, which are skipped and so need no
space reserved for a change; `to_change` contains only the lines that will
actually be changed.
* Use `token.chars().count()` for token char length
* Create `changes` with capacity instead of reserving
* Remove unnecessary clones in `test_find_line_comment`
* Add test case for 0 margin comments
* Add comments explaining `find_line_comment`
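A minimal sketch of the line-collection and margin logic described above; the signatures are illustrative rather than the actual helix-core API, which operates on a `Rope` instead of string slices:

```rust
/// Collect the line numbers covered by all selections, skipping duplicates,
/// so a line shared by multiple selections is only toggled once.
fn lines_for_selections(selections: &[(usize, usize)]) -> Vec<usize> {
    let mut lines = Vec::new();
    for &(start_line, end_line) in selections {
        for line in start_line..=end_line {
            if !lines.contains(&line) {
                lines.push(line);
            }
        }
    }
    lines
}

/// Illustrative return shape: whether to comment, the lines to operate on,
/// and the margin (1 when a space follows the token, 0 otherwise).
fn find_line_comment(
    token: &str,
    text_lines: &[&str],
    lines: &[usize],
) -> (bool, Vec<usize>, usize) {
    let mut to_change = Vec::with_capacity(lines.len());
    let mut commented = true;
    let mut margin = 1;
    for &line in lines {
        let trimmed = text_lines[line].trim_start();
        if trimmed.is_empty() {
            continue; // blank lines are skipped entirely
        }
        to_change.push(line);
        if let Some(rest) = trimmed.strip_prefix(token) {
            // A token not followed by a space means a 0-margin comment.
            if !rest.starts_with(' ') {
                margin = 0;
            }
        } else {
            commented = false;
        }
    }
    (!commented, to_change, margin)
}

fn main() {
    let text = ["// a", "", "//b"];
    let lines = lines_for_selections(&[(0, 2), (2, 2)]);
    let (should_comment, to_change, margin) = find_line_comment("//", &text, &lines);
    assert!(!should_comment); // every non-blank line is already commented
    assert_eq!(to_change, vec![0, 2]);
    assert_eq!(margin, 0);
}
```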
Also fix a bunch of bugs related to it.
Fixed warning:
```
warning: the item `fmt` is imported redundantly
  --> helix-core/src/syntax.rs:98:9
   |
16 |     fmt,
   |     --- the item `fmt` is already imported here
...
98 |     use std::fmt;
   |         ^^^^^^^^
   |
```
Still needs to be done, but should be part of a separate PR.
This also fixes a bug in `Selection::normalize()` that could result in an
out-of-bounds primary index.
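To illustrate the kind of issue being fixed (a simplified sketch; the real `Selection` type in helix-core has a different shape, and the clamp stands in for properly remapping the primary index):

```rust
/// Simplified stand-in for a selection: a list of ranges plus the index
/// of the primary range.
struct Selection {
    ranges: Vec<(usize, usize)>,
    primary_index: usize,
}

impl Selection {
    /// Merge overlapping ranges. If merging removes ranges, the primary
    /// index must be remapped (or at least clamped) so it never points
    /// past the end of the shortened list.
    fn normalize(mut self) -> Self {
        self.ranges.sort_by_key(|&(from, _)| from);
        let mut merged: Vec<(usize, usize)> = Vec::with_capacity(self.ranges.len());
        for (from, to) in self.ranges {
            match merged.last_mut() {
                Some(last) if from <= last.1 => last.1 = last.1.max(to),
                _ => merged.push((from, to)),
            }
        }
        // Without this, a primary_index computed against the old, longer
        // list can end up out of bounds after the merge.
        self.primary_index = self.primary_index.min(merged.len() - 1);
        self.ranges = merged;
        self
    }
}

fn main() {
    let sel = Selection {
        ranges: vec![(0, 5), (3, 8), (10, 12)],
        primary_index: 2,
    };
    let sel = sel.normalize();
    assert_eq!(sel.ranges, vec![(0, 8), (10, 12)]);
    assert!(sel.primary_index < sel.ranges.len());
}
```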
Apparently I accidentally deleted that behavior in the cleanup.
This had a bunch of knock-on effects that were buggy, such as bracket
match highlighting.
Size hint is enough.
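Assuming this refers to pre-allocating from `Iterator::size_hint` rather than requiring an exact length, an illustrative sketch only:

```rust
/// Collect items into a Vec, pre-allocating from the iterator's size hint.
fn collect_with_hint<I: Iterator<Item = u32>>(iter: I) -> Vec<u32> {
    // The lower bound of the size hint is always safe to pre-allocate;
    // an ExactSizeIterator bound is not needed for this.
    let (lower, _) = iter.size_hint();
    let mut out = Vec::with_capacity(lower);
    out.extend(iter);
    out
}

fn main() {
    let v = collect_with_hint((0..5u32).map(|n| n * 2));
    assert_eq!(v, vec![0, 2, 4, 6, 8]);
}
```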
* Made toggle_comments language-dependent
* Fixed test cases
* Applied clippy suggestion
* Small fixes
* Applied clippy suggestion
Co-authored-by: Cor <prive@corpeters.nl>
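A sketch of what "language-dependent" can look like here: the comment token is looked up from the document's language configuration instead of being hard-coded. The struct and field names are illustrative, loosely modelled on the `comment-token` setting in languages.toml:

```rust
/// Illustrative per-language configuration.
struct LanguageConfig {
    comment_token: Option<&'static str>,
}

/// Pick the comment token for the current language, falling back to `//`
/// when the language does not define one.
fn comment_token(config: Option<&LanguageConfig>) -> &str {
    config.and_then(|c| c.comment_token).unwrap_or("//")
}

fn main() {
    let python = LanguageConfig { comment_token: Some("#") };
    assert_eq!(comment_token(Some(&python)), "#");
    assert_eq!(comment_token(None), "//");
}
```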
* Added an option to provide a custom config file to the LSP.
* Simplified lsp loading routine with anyhow
* Moved config to language.toml
* Fixed test case
* Cargo fmt
* Revert now-useless changes
* Renamed custom_config to config
Co-authored-by: Cor <prive@corpeters.nl>
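A sketch of the idea: the language entry carries an optional free-form `config` value that gets handed to the language server as its initialization options. The types and field names here are illustrative, not the actual helix-lsp structures:

```rust
/// Illustrative language entry: the optional `config` is an opaque blob
/// (JSON in practice) that is passed straight to the language server.
struct LanguageConfiguration {
    language_id: String,
    /// Serialized JSON sent as the server's initialization options.
    config: Option<String>,
}

/// What would be sent as initializationOptions in the LSP `initialize` request.
fn initialization_options(lang: &LanguageConfiguration) -> Option<String> {
    lang.config.clone()
}

fn main() {
    let rust = LanguageConfiguration {
        language_id: "rust".to_string(),
        // Placeholder server setting; real servers define their own keys.
        config: Some(r#"{ "someSetting": true }"#.to_string()),
    };
    assert_eq!(rust.language_id, "rust");
    assert!(initialization_options(&rust).is_some());
}
```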
Also tweaked some of the existing behavior that seemed inconsistent
and/or buggy. It's mostly identical; only a few corner cases differ.
* rewrote Rust highlights.scm
* wip
* wip
* wip
* wip
* fixed type highlighting
* wip
* rewrite again
* moved operators
* missing newline
* missing newline
* update book
* fix constructor highlighting
* fix constructor highlighting
* fix const highlighting
* better constructor highlighting
* remove dup, bug was my locals.scm file
* fixed docs
* merge
* fixed for highlighting
* add yield
* remove yield
* added yield back
* fixed yield highlighting
* unnecessary
Bumps [unicode-segmentation](https://github.com/unicode-rs/unicode-segmentation) from 1.7.1 to 1.8.0.
- [Release notes](https://github.com/unicode-rs/unicode-segmentation/releases)
- [Commits](https://github.com/unicode-rs/unicode-segmentation/compare/1.7.1...v1.8.0)
---
updated-dependencies:
- dependency-name: unicode-segmentation
dependency-type: direct:production
update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <support@github.com>
In particular, this wraps the annoying logic involved in keeping the
cursor width at one grapheme.
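A rough sketch of the kind of helper meant here, using the unicode-segmentation crate to clamp a cursor to exactly one grapheme cluster; the function name is made up for the example:

```rust
use unicode_segmentation::UnicodeSegmentation;

/// Return the byte range of the single grapheme cluster containing `at`,
/// so a cursor rendered from it is always one grapheme wide and never
/// splits a multi-byte cluster.
fn single_grapheme_range(text: &str, at: usize) -> (usize, usize) {
    for (start, g) in text.grapheme_indices(true) {
        let end = start + g.len();
        if at < end {
            return (start, end);
        }
    }
    (text.len(), text.len())
}

fn main() {
    let text = "a\u{1F600}b"; // 'a', a 4-byte emoji, 'b'
    // A byte offset in the middle of the emoji still yields the whole cluster.
    assert_eq!(single_grapheme_range(text, 2), (1, 5));
    assert_eq!(single_grapheme_range(text, 0), (0, 1));
}
```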
For example, when the cursor is _on_ the `'` in `'word'`, the cursor
wouldn't move, because the search for a matching pair started _from_ the
cursor's position and simply found itself.
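A simplified sketch of the fix's idea: when searching for the closing half of a pair whose halves are identical, the search has to start one past the cursor, otherwise the character under the cursor matches itself. The helper is illustrative, not the actual surround module:

```rust
/// Find the closing `ch` for a pair whose two halves are the same character
/// (e.g. ' or "). Starting the search at `cursor` would just find the
/// character under the cursor, so the scan begins at cursor + 1.
fn find_matching_quote(text: &[char], cursor: usize, ch: char) -> Option<usize> {
    if text.get(cursor) != Some(&ch) {
        return None;
    }
    text.iter()
        .enumerate()
        .skip(cursor + 1)
        .find(|&(_, &c)| c == ch)
        .map(|(i, _)| i)
}

fn main() {
    let text: Vec<char> = "'word'".chars().collect();
    // Cursor on the opening quote: the match is the closing quote, not itself.
    assert_eq!(find_matching_quote(&text, 0, '\''), Some(5));
}
```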
* Add textobjects for word
* Add textobjects for surround characters
* Apply clippy lints
* Remove ThisWordPrevBound in favor of PrevWordEnd
It's the same as PrevWordEnd except that it takes the current char
into account, so use a "flag" to capture that use case
* Add tests for PrevWordEnd movement
* Remove ThisWord* movements
They did not preserve anchor positions and were only used
for textobject boundary search anyway, so replace them with
simple position-finding functions
* Rewrite tests of word textobject
* Add tests for surround textobject
* Add textobject docs
* Refactor textobject word position functions
* Apply clippy lints on textobject
* Fix overflow error with textobjects
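An illustrative sketch of the word textobject idea: expand the cursor position to the word's boundaries by scanning for non-word characters on each side. The real implementation works on ropes and distinguishes "inside" from "around"; this only shows the boundary search:

```rust
/// Return the half-open char range of the word containing `pos`, or None if
/// `pos` is not on a word character ("inside word" only).
fn word_textobject(text: &[char], pos: usize) -> Option<(usize, usize)> {
    let is_word = |c: char| c.is_alphanumeric() || c == '_';
    if !text.get(pos).copied().map_or(false, is_word) {
        return None;
    }
    // Walk left to the start of the word, then right past its end.
    let mut start = pos;
    while start > 0 && is_word(text[start - 1]) {
        start -= 1;
    }
    let mut end = pos + 1;
    while end < text.len() && is_word(text[end]) {
        end += 1;
    }
    Some((start, end))
}

fn main() {
    let text: Vec<char> = "fn word_bound()".chars().collect();
    // Cursor anywhere inside "word_bound" selects the whole identifier.
    assert_eq!(word_textobject(&text, 5), Some((3, 13)));
}
```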
* reloading functionality
* fn with_newline_eof()
* fmt
* wip
* wip
* wip
* wip
* moved to core, added simd feature for encoding_rs
* wip
* rm
* .gitignore
* wip
* local wip
* wip
* wip
* no features
* wip
* nit
* remove simd
* doc
* clippy
* clippy
* address comments
* add indentation & line ending change
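A small sketch of the decoding side using encoding_rs, which this work adds as a dependency: sniff a BOM and fall back to UTF-8. The surrounding I/O and document plumbing are omitted, and the function itself is just for illustration:

```rust
use encoding_rs::{Encoding, UTF_8};

/// Decode raw file bytes into a String, using the BOM to pick the encoding
/// and falling back to UTF-8 when no BOM is present. Returns the text, the
/// encoding that was used, and whether malformed sequences were replaced.
fn decode_bytes(bytes: &[u8]) -> (String, &'static Encoding, bool) {
    let encoding = Encoding::for_bom(bytes)
        .map(|(enc, _bom_len)| enc)
        .unwrap_or(UTF_8);
    let (text, had_errors) = encoding.decode_with_bom_removal(bytes);
    (text.into_owned(), encoding, had_errors)
}

fn main() {
    // UTF-8 with a BOM: the BOM is stripped and the text decodes cleanly.
    let bytes = b"\xEF\xBB\xBFhello";
    let (text, encoding, had_errors) = decode_bytes(bytes);
    assert_eq!(text, "hello");
    assert_eq!(encoding, UTF_8);
    assert!(!had_errors);
}
```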
One of them was a lot more obvious than I thought.
I'm not sure how to address them, because they look like they
might be bugs, and code is involved. Will poke the relevant people.
Still a bunch more warnings to fix in core, but it's a start.
This way they do less work, are more specific to what we actually
need, and they compose.