Compare commits

...

109 Commits

Author SHA1 Message Date
Lunny Xiao
afa7f22dd8 Add changelog for v1.13.1 (#14172)
* Add changelog for v1.13.1

* Update CHANGELOG.md

Co-authored-by: John Olheiser <john.olheiser@gmail.com>

* Update CHANGELOG.md

* Update CHANGELOG.md

Co-authored-by: John Olheiser <john.olheiser@gmail.com>

* Update CHANGELOG.md

Co-authored-by: John Olheiser <john.olheiser@gmail.com>

* Update CHANGELOG.md

Co-authored-by: John Olheiser <john.olheiser@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2020-12-28 12:36:22 -05:00
Lunny Xiao
182be90655 Fix bug of link query order on markdown render (#14156) (#14171)
* Fix bug of link query order on markdown render

* Fix bluemonday bug and fix one wrong test

Co-authored-by: 6543 <6543@obermui.de>

Co-authored-by: 6543 <6543@obermui.de>
2020-12-28 12:08:55 -05:00
6543
4a738a8f16 Migration: drop too long repo topics (#14152) (#14155)
* Migration: drop too long repo topics

* Update modules/migrations/gitea_uploader.go
2020-12-26 21:57:06 -05:00
zeripath
206b66a184 Fix escaping issue in diff (#14154)
Ensure that linecontent is escaped before passing to template.HTML

Signed-off-by: Andrew Thornton <art27@cantab.net>
2020-12-26 22:15:42 +00:00
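A minimal Go sketch of the pattern the commit above describes: escape user-controlled line content before wrapping it in template.HTML (the helper name is illustrative, not Gitea's actual diff code):

```go
package main

import (
	"fmt"
	"html"
	"html/template"
)

// renderLine escapes raw diff line content before it is marked as safe HTML,
// so that characters like < or & cannot inject markup into the page.
func renderLine(lineContent string) template.HTML {
	return template.HTML(html.EscapeString(lineContent))
}

func main() {
	fmt.Println(renderLine(`<script>alert("x")</script>`))
	// Output: &lt;script&gt;alert(&#34;x&#34;)&lt;/script&gt;
}
```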
Daniil Pankratov
205be63bc1 Fix creation OAuth2 auth source from CLI. (#14146)
Fix #8356
2020-12-25 20:02:52 +08:00
zeripath
bf1441b1e1 Ensure that search term and page are not lost on adoption page-turn (#14133) (#14143)
Backport #14133

Fix #14111

Signed-off-by: Andrew Thornton <art27@cantab.net>
2020-12-24 21:54:15 +00:00
6543
fae18bdac0 more test case for STORAGE_TYPE overrides (and fixes) (#14096) (#14104)
Signed-off-by: 胡玮文 <huww98@outlook.com>

Co-authored-by: 胡玮文 <huww98@outlook.com>
2020-12-22 09:13:57 +02:00
6543
661e3e2bdc Fix storage config implementation (#14091) (#14095)
The design is very flexible, but not implemented correctly.
This commit fixes several issues:
* Custom storage type stated in https://docs.gitea.io/en-us/config-cheat-sheet/#storage-storage
  not working
* [storage.attachments], [storage.minio] sections not respected

Signed-off-by: 胡玮文 <huww98@outlook.com>

Co-authored-by: 胡玮文 <huww98@outlook.com>
2020-12-22 00:56:18 +02:00
techknowlogick
70038719bf dep: update crypto. info: https://golangtutorial.dev/news/fix-in-crypto-package/ (#14078) 2020-12-21 14:02:40 +08:00
silverwind
55d7e53d99 Fix panic in BasicAuthDecode (#14046) (#14048)
* Fix panic in BasicAuthDecode

If the string does not contain ":" that function would run into an
`index out of range [1] with length 1` error. prevent that.

* Update BasicAuthDecode()

Co-authored-by: 6543 <6543@obermui.de>

Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: zeripath <art27@cantab.net>
2020-12-19 00:19:43 +08:00
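A minimal sketch of the kind of guard the commit above describes: decode the Basic credentials and only split on ":" when it is actually present, instead of indexing past the end (hypothetical helper, not the exact Gitea implementation):

```go
package main

import (
	"encoding/base64"
	"errors"
	"fmt"
	"strings"
)

// basicAuthDecode decodes a Basic auth payload and refuses input without a
// ":" separator rather than running into an index-out-of-range panic.
func basicAuthDecode(encoded string) (user, pass string, err error) {
	raw, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		return "", "", err
	}
	parts := strings.SplitN(string(raw), ":", 2)
	if len(parts) != 2 {
		return "", "", errors.New("invalid basic authentication")
	}
	return parts[0], parts[1], nil
}

func main() {
	fmt.Println(basicAuthDecode(base64.StdEncoding.EncodeToString([]byte("user:secret"))))
	fmt.Println(basicAuthDecode(base64.StdEncoding.EncodeToString([]byte("no-separator"))))
}
```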
6543
96d41287e5 [API] GetCombinedCommitStatusByRef always return json & swagger doc fixes (#14047)
* Fix swagger docs

* always return json
2020-12-18 13:38:47 +00:00
6543
df11075389 HotFix: Hide private participation in Orgs (#13994) (#14031)
* HotFix: Hide private participation in Orgs

Co-authored-by: zeripath <art27@cantab.net>
2020-12-17 22:32:24 +01:00
zeripath
b8a2cd9f40 Always wait for the cmd to finish (#14006) (#14039)
Backport #14006

After cancelling the context we still need to wait for the
command to finish otherwise zombie processes may occur

Fix #13987
2020-12-17 21:06:51 +01:00
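The zombie-process issue above comes from cancelling a command's context without collecting the child. A minimal sketch of the pattern (always call Wait, even after cancellation):

```go
package main

import (
	"context"
	"log"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "sleep", "10")
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	// Even if ctx is cancelled and the process is killed, Wait must still be
	// called so the kernel can reap the child and no zombie is left behind.
	if err := cmd.Wait(); err != nil {
		log.Printf("command finished with error: %v", err)
	}
}
```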
mrsdizzie
4f296f7436 Don't use simpleMDE editor on mobile devices for 1.13 (#14029)
* Don't use simpleMDE editor on mobile devices

simpleMDE doesn't work properly on mobile devices -- we've replaced it with the better-behaved EasyMDE in 1.14, but since that change can't be backported to 1.13 we will just disable the editor on mobile here.

* make isMobile function per code review -- disable simpleMDE for code review and replies

* Fix issue with plain text and wiki

Co-authored-by: silverwind <me@silverwind.io>
2020-12-17 17:39:12 +01:00
6543
78b9ef3586 Add emoji in label to project boards (#13978) (#14021)
* Update view.tmpl

Added rendering of emoji to project label

* Add RenderEmojiPlain to the title and remove has-emoji

Co-authored-by: zeripath <art27@cantab.net>

Co-authored-by: Rakshith Ravi <rakshith.ravi@gmx.com>
Co-authored-by: zeripath <art27@cantab.net>
2020-12-16 15:15:58 -05:00
Cirno the Strongest
90dfe445c2 Send webhook when tag is removed via Web UI (#14015) (#14019)
* Send webhook when tag is removed via Web UI

* Stray code (cherry picked from commit 53308de0bf)

* Fix for 1.13
2020-12-16 18:24:02 +01:00
Jimmy Praet
a728d1e046 always use headCommitID for review comment diff (#14011) 2020-12-16 18:50:30 +08:00
zeripath
7f85728cf9 Trim the branch prefix from action.GetBranch (#13981) (#13986)
Backport #13981

 #13882 has revealed that the refname of an action is actually only a
refname pattern and not necessarily a branch. For example, pushing to
refs/heads/master will result in an action with refname refs/heads/master,
but pushing to master will result in a refname of master.

The simplest fix here is to trim the prefix, and that is what this PR
proposes.

Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: a1012112796 <1012112796@qq.com>

Co-authored-by: a1012112796 <1012112796@qq.com>
2020-12-14 15:35:40 -05:00
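A minimal sketch of the normalisation the commit above describes: strip the `refs/heads/` prefix so both forms of refname compare equal (illustrative only):

```go
package main

import (
	"fmt"
	"strings"
)

// branchName trims the full-ref prefix so "refs/heads/master" and "master"
// both resolve to the plain branch name.
func branchName(refName string) string {
	return strings.TrimPrefix(refName, "refs/heads/")
}

func main() {
	fmt.Println(branchName("refs/heads/master")) // master
	fmt.Println(branchName("master"))            // master
}
```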
zeripath
d2b308ae35 Ensure template renderer is available before storage handler (#13982)
`ctx.Error` requires that templates are available for this to
render the error page otherwise there will be a panic at this
time.

This was fixed in #13164 but was not completely backported.

Fix #13971

Signed-off-by: Andrew Thornton <art27@cantab.net>
2020-12-14 20:45:33 +08:00
zeripath
8e8e8ee150 Whenever the password is updated ensure that the hash algorithm is too (#13966) (#13967)
Backport #13966

`user.HashPassword` may potentially - and in fact now likely does - change
the `passwd_hash_algo`; therefore, whenever the `passwd` is updated, the hash
algorithm column also needs to be updated.

Fix #13832

Thanks @fblaese for the hint

Signed-off-by: Andrew Thornton <art27@cantab.net>
2020-12-13 01:01:44 +01:00
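The concrete fix appears in the `runChangePassword` hunk later in this compare: `passwd_hash_algo` is added to the column list passed to `models.UpdateUserCols`. A small self-contained sketch of the same idea (the types and helpers here are stand-ins, not Gitea's):

```go
package main

import "fmt"

// user mirrors only the columns involved in the fix.
type user struct {
	Passwd, PasswdHashAlgo, Salt string
}

// hashPassword stands in for user.HashPassword: it may switch the hash
// algorithm, which is exactly why the algorithm column must be saved too.
func (u *user) hashPassword(plain string) {
	u.PasswdHashAlgo = "argon2"
	u.Passwd = "argon2$" + plain // placeholder, not a real hash
}

// updateUserCols stands in for models.UpdateUserCols(user, cols...).
func updateUserCols(u *user, cols ...string) {
	fmt.Println("UPDATE `user` SET", cols)
}

func main() {
	u := &user{PasswdHashAlgo: "pbkdf2"}
	u.hashPassword("secret")
	// The fix: persist passwd_hash_algo alongside passwd and salt.
	updateUserCols(u, "passwd", "passwd_hash_algo", "salt")
}
```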
6543
05ee88e576 Enforce setting HEAD in wiki to master (#13950) (#13961)
The default branch in wikis must be master - therefore forcibly set the HEAD
to master.

Fix #13846

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: 6543 <6543@obermui.de>

Co-authored-by: zeripath <art27@cantab.net>
2020-12-12 17:21:26 +00:00
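Forcing a wiki repository's HEAD onto master can be done with `git symbolic-ref`; a minimal sketch using plain git via os/exec (an assumption about the mechanism, not Gitea's git module):

```go
package main

import (
	"log"
	"os/exec"
)

// setWikiHead points HEAD of the given bare wiki repository at master,
// regardless of what the default branch of the source happened to be.
func setWikiHead(repoPath string) error {
	cmd := exec.Command("git", "symbolic-ref", "HEAD", "refs/heads/master")
	cmd.Dir = repoPath
	return cmd.Run()
}

func main() {
	if err := setWikiHead("/path/to/repo.wiki.git"); err != nil {
		log.Fatal(err)
	}
}
```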
Lunny Xiao
0d7cb2323f Fix feishu webhook caused by API changed (#13937) (#13938)
fix #13858
2020-12-11 16:11:32 +01:00
Lunny Xiao
5cdffc2b0c log error when login failed (#13903) (#13913)
Co-authored-by: techknowlogick <techknowlogick@gitea.io>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: 6543 <6543@obermui.de>
2020-12-09 10:37:15 -05:00
Jimmy Praet
a0101c61a4 Fix Quote Reply button on review diff (#13830) (#13898)
Backport of #13830 

Co-authored-by: 6543 <6543@obermui.de>
2020-12-08 22:12:35 +00:00
a1012112796
c0b1197a64 Fix Pull Merge when tag with same name as base branch exist (#13882) (#13896)
fix dst refspec error in 'Push back to upstream' when the base branch has the
same name as a tag.

fix #13851
Signed-off-by: a1012112796 <1012112796@qq.com>

Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: zeripath <art27@cantab.net>
2020-12-08 12:58:44 +01:00
6543
e39ed0b1d9 [API] return original URL of Repositories (#13885) (#13886) 2020-12-08 05:59:19 +01:00
manuelluis
cb24cbc1fc Fix branch/tag notifications in mirror sync (#13855) (#13862)
Co-authored-by: Gitea <gitea@fake.local>
Co-authored-by: 6543 <6543@obermui.de>
2020-12-05 23:30:28 -05:00
silverwind
584d01cf2c Fix mermaid chart size (#13865) 2020-12-05 22:13:31 -05:00
mrsdizzie
798fdeae45 Fix crash in short link processor (#13839) (#13841)
Fixes #13819
2020-12-04 04:08:48 +01:00
silverwind
87997cccbb Update font stack to bootstrap's latest (#13834) (#13837)
Backport #13834
2020-12-04 02:21:34 +01:00
John Olheiser
0d5111c5c3 Make sure email recipients can see issue (#13820) (#13827)
* Initial pass

* Remove over-op

Signed-off-by: jolheiser <john.olheiser@gmail.com>
2020-12-03 22:37:33 +01:00
Jimmy Praet
10fff12da4 Reply button is not removed when deleting a code review comment (#13824)
Backport #13774
2020-12-03 20:26:47 +00:00
zeripath
0d43a2a069 When reinitialising DBConfig reset the database use flags (#13796) (#13811)
Backport #13796

One perennial issue is users running the install page,
changing the database dialect and then suffering from problems afterwards.

This PR simply resets all of the database.Use flags on
initDBConfig. This should prevent this issue from occurring.

Fix #13788
Fix #5480

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2020-12-03 11:13:19 +01:00
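A hedged sketch of the reset described above; the flag and function names are illustrative stand-ins for the `database.Use*` settings mentioned in the commit message, not necessarily Gitea's exact fields:

```go
package main

import "fmt"

// dbConfig holds one boolean per supported dialect, mirroring the
// "database.Use*" flags mentioned in the commit message (names assumed).
type dbConfig struct {
	UseSQLite3, UseMySQL, UsePostgreSQL, UseMSSQL bool
	Type                                          string
}

// initDBConfig clears every Use* flag before setting the one matching the
// freshly chosen dialect, so switching dialects on the install page cannot
// leave a stale flag behind.
func (c *dbConfig) initDBConfig(dialect string) {
	c.UseSQLite3, c.UseMySQL, c.UsePostgreSQL, c.UseMSSQL = false, false, false, false
	c.Type = dialect
	switch dialect {
	case "sqlite3":
		c.UseSQLite3 = true
	case "mysql":
		c.UseMySQL = true
	case "postgres":
		c.UsePostgreSQL = true
	case "mssql":
		c.UseMSSQL = true
	}
}

func main() {
	var c dbConfig
	c.initDBConfig("mysql")
	c.initDBConfig("postgres") // second run must not keep UseMySQL set
	fmt.Printf("%+v\n", c)
}
```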
6543
8396b792f8 Migrations: Use Process Manager to create own Context (#13793) 2020-12-02 15:11:11 -06:00
techknowlogick
d551152582 1.13.0 Changelog (#13782)
Co-authored-by: 6543 <6543@obermui.de>
2020-12-02 06:54:26 +02:00
techknowlogick
f677ed628b set git-core paths in snap (#13711) (#13781)
Signed-off-by: artivis <deray.jeremie@gmail.com>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>

Co-authored-by: Jeremie Deray <deray.jeremie@gmail.com>
2020-12-01 19:36:11 -05:00
6543
07629bd55c Add Allow-/Block-List for Migrate & Mirrors (#13610) (#13776)
* add black list and white list support for migrating repositories

* specify log message

* use blocklist/allowlist

* always use lowercase to match URL

* Apply allow/block

* Settings: use existing "migrations" section

* convert domains lower case

* don't store unused value

* Block private addresses for migration by default

* use proposed-upstream func to detect private IP addr

* add own error for blocked migration, add tests, improve API

* fix test

* fix-if-localhost-is-ipv4

* rename error & error message

* rename setting options

* Apply suggestions from code review

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2020-12-01 19:28:34 -05:00
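A minimal sketch of the allow/block decision the PR above introduces, following the documented semantics further down in this diff (lower-cased host comparison; blocklist ignored when an allowlist is set); the helper name is hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// migrationAllowed applies the documented rules: if an allowlist is set the
// host must appear in it (and the blocklist is ignored); otherwise the host
// only has to be absent from the blocklist. Comparison is case-insensitive.
func migrationAllowed(host string, allowed, blocked []string) bool {
	host = strings.ToLower(host)
	if len(allowed) > 0 {
		for _, d := range allowed {
			if strings.ToLower(d) == host {
				return true
			}
		}
		return false
	}
	for _, d := range blocked {
		if strings.ToLower(d) == host {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(migrationAllowed("github.com", []string{"github.com"}, nil))      // true
	fmt.Println(migrationAllowed("evil.example", nil, []string{"evil.example"})) // false
}
```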
silverwind
d475b656b1 Set RUN_MODE prod by default (#13765) (#13767)
* Set RUN_MODE prod by default (#13765)

I think it's a bad default to have "dev" as the default run mode which
enables debugging and now also disables HTTP caching. It's better to
just default to a value suitable for general deployments.

Co-authored-by: techknowlogick <techknowlogick@gitea.io>

* flip default in checkRunMode

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2020-12-01 09:55:38 +08:00
silverwind
6e14773c44 Fix bogus http requests on diffs (#13760) (#13761)
The .blob-excerpt elements don't have these data attributes in some
cases, resulting in bogus HTTP requests when expanding a diff and clicking
into the expanded area. This prevents those requests.

Should backport to 1.13.

Fixes: https://github.com/go-gitea/gitea/issues/13759
2020-11-30 14:51:48 -05:00
a1012112796
25421f08c0 ui: show 'owner' tag for real owner (#13689) (#13743)
* ui: show 'owner' tag for real owner

Signed-off-by: a1012112796 <1012112796@qq.com>

* Update custom/conf/app.example.ini

* simplify logic

fix logic
fix a small bug about original author

* remove system manager tag

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: Lauris BH <lauris@nix.lv>
2020-11-29 14:50:58 +02:00
zeripath
bdb491e764 Push HEAD instead of master when initialising repositories (#13719) (#13740)
* Push HEAD instead of master when initialising repositories

It is possible with modern Git to change the initial branch to something other
than master. This breaks initialising repositories because we assume that the
initial branch is going to be master unless specifically changed.

This PR simply bypasses this issue by pushing the HEAD rather than the master branch.

Signed-off-by: Andrew Thornton <art27@cantab.net>

* Update modules/repository/init.go

Co-authored-by: mrsdizzie <info@mrsdizzie.com>

Co-authored-by: mrsdizzie <info@mrsdizzie.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>

Co-authored-by: mrsdizzie <info@mrsdizzie.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2020-11-28 16:59:32 -05:00
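The point of the change above is that `git push <remote> HEAD` works whatever the initial branch is called, whereas pushing `master` fails when the template branch differs. A minimal sketch with plain git (not Gitea's repository module):

```go
package main

import (
	"log"
	"os/exec"
)

// pushInitialCommit pushes whatever HEAD points at, so the call succeeds even
// when git's default initial branch is not named master.
func pushInitialCommit(repoPath, remote string) error {
	cmd := exec.Command("git", "push", remote, "HEAD")
	cmd.Dir = repoPath
	return cmd.Run()
}

func main() {
	if err := pushInitialCommit("/tmp/new-repo", "origin"); err != nil {
		log.Fatal(err)
	}
}
```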
John Olheiser
a82c7d4323 Increment skip to avoid infini-loop (#13703) (#13727)
Signed-off-by: jolheiser <john.olheiser@gmail.com>

Co-authored-by: Lauris BH <lauris@nix.lv>

Co-authored-by: Lauris BH <lauris@nix.lv>
2020-11-28 04:55:53 +00:00
silverwind
7ec1c13f53 CSS table fixes (#13693)
Backport https://github.com/go-gitea/gitea/pull/13692 to 1.13.
2020-11-24 19:45:24 +02:00
6543
4c9d00cf78 finaly fix gitlab migration with subdir 2.0 (#13646) (#13678)
* final fix 2.0?

* ignore Approvals for pulls if not found

* CI.restart()

Co-authored-by: Lauris BH <lauris@nix.lv>

Co-authored-by: Lauris BH <lauris@nix.lv>
2020-11-23 16:40:58 +00:00
6543
33431fcbd3 Validate email before inserting/updating (#13475) (#13666)
* Add email validity check (#13475)

* Improve error feedback for duplicate deploy keys

Instead of a generic HTTP 500 error page, a flash message is rendered
with the deploy key page template to inform the user that a key with the
intended title already exists.

* API returns 422 error when key with name exists

* Add email validity checking

Add email validity checking for the following routes:
[Web interface]
1. User registration
2. User creation by admin
3. Adding an email through user settings
[API]
1. POST /admin/users
2. PATCH /admin/users/:username
3. POST /user/emails

* Add further tests

* Add signup email tests

* Add email validity check for linking existing account

* Address PR comments

* Remove unneeded DB session

* Move email check to updateUser

Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>

* skip email validation on empty string (#13627)

- move validation into its own function
- use a session for UpdateUserSetting

* rm TODO for backport

Co-authored-by: Chris Shyi <chrisshyi13@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2020-11-22 12:31:35 -05:00
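A hedged sketch of the sort of validity check added above, using the standard library's address parser plus a guard against control characters like the `\r\n` used in the new API test; this is illustrative, not Gitea's exact validator:

```go
package main

import (
	"errors"
	"fmt"
	"net/mail"
	"strings"
)

// validateEmail skips empty values (per the backported follow-up), refuses
// whitespace and control characters such as CR or LF, and then lets net/mail
// check the overall address syntax.
func validateEmail(email string) error {
	if email == "" {
		return nil
	}
	if strings.ContainsAny(email, "\r\n ") {
		return errors.New("email contains invalid characters")
	}
	if _, err := mail.ParseAddress(email); err != nil {
		return errors.New("email is not valid: " + err.Error())
	}
	return nil
}

func main() {
	fmt.Println(validateEmail("someone@example.com"))
	fmt.Println(validateEmail("invalid_email@domain.com\r\n"))
}
```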
6543
f2a3a9117e * Handle incomplete diff files properly (#13668)
The code for parsing diff hunks has a bug whereby a very long line in a very long diff would not be completely read, leading to an unexpected character.

  This PR ensures that the line is completely cleared.

* Also allow git max line length <4096

* Add test case

Fix #13602

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: Andrew Thornton <art27@cantab.net>
2020-11-22 16:51:39 +00:00
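The bug above is the classic bufio pitfall: `ReadLine` can return only a prefix of a very long line, and the rest must be drained before the next read. A minimal sketch of "completely clearing" such a line (a general illustration, not the real ParsePatch code):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// readTruncatedLine returns at most max bytes of the next line and then
// drains whatever is left of that line, so the following read starts at the
// beginning of the next line instead of mid-way through this one.
func readTruncatedLine(r *bufio.Reader, max int) (string, error) {
	line, isPrefix, err := r.ReadLine()
	if err != nil {
		return "", err
	}
	out := string(line)
	if len(out) > max {
		out = out[:max]
	}
	// Drain the remainder of an over-long line.
	for isPrefix {
		_, isPrefix, err = r.ReadLine()
		if err != nil {
			return out, err
		}
	}
	return out, nil
}

func main() {
	r := bufio.NewReaderSize(strings.NewReader(strings.Repeat("x", 10000)+"\nnext line\n"), 16)
	first, _ := readTruncatedLine(r, 4096)
	second, _ := readTruncatedLine(r, 4096)
	fmt.Println(len(first), second)
}
```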
Karl Heinz Marbaise
ef7a52826d Fix issue/pull request list assignee filter (#13647) (#13651)
* Fixes #13641 - Filtering in Pull Request kept all the time.
 - The URL keeps carrying the assignee once a filter type
   has been selected.

Signed-off-by: Karl Heinz Marbaise <kama@soebes.de>

* Followup Fixes #13641 - Filtering in Pull Request kept all the time.
 - The URL keeps carrying the assignee once a filter type
   has been selected.
 - The same behaviour was observed for issues viewed via milestones.

Signed-off-by: Karl Heinz Marbaise <kama@soebes.de>
2020-11-19 16:58:35 -06:00
techknowlogick
e0d28e2026 finally fix gitlab migration with subdir (#13629) (#13633)
* finally fix #13535

* add logging

Co-authored-by: 6543 <6543@obermui.de>
2020-11-19 11:20:12 -05:00
6543
2f6dad2e34 API: Fix GetQueryBeforeSince (#13561) 2020-11-19 02:21:21 +00:00
Lunny Xiao
bcde51f4c2 Fix a bug when check if owner is active (#13613) 2020-11-18 11:59:24 +02:00
6543
ed3a4cd103 Migration: Gitlab: Support Subdirectory (#13563) (#13591)
Co-authored-by: techknowlogick <techknowlogick@gitea.io>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2020-11-17 15:01:33 +08:00
silverwind
c6ab79ee3c Fix Fomatic Build (#13596)
Port of #13593 to 1.13
2020-11-16 18:01:05 -05:00
6543
48fca01b0d [API] Only Return Json (#13511) (#13565)
Backport #13511 

Co-authored-by: zeripath <art27@cantab.net>
2020-11-15 16:29:16 +00:00
techknowlogick
9a8e02ce30 missing quotes in default value slice (#13550) (#13557)
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>

Co-authored-by: Patrick Aljord <patcito@gmail.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2020-11-14 14:12:01 +02:00
Lunny Xiao
159a4db30a Add missed sync branch/tag webhook (#13538) (#13556)
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>

Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2020-11-13 22:04:58 -05:00
mrsdizzie
b4d18dae19 Use existing analyzer module for language detection for highlighting (#13522) (#13551)
* Use existing analyzer module for language detection for highlighting

Thanks @lafriks for pointing out we can reuse existing code for more reliable language detection here.

* Update modules/highlight/highlight.go

Co-authored-by: Lauris BH <lauris@nix.lv>

Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>

Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lauris BH <lauris@nix.lv>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2020-11-13 18:05:51 -05:00
Lunny Xiao
ee0097f97d Prevent git operations for inactive users (#13527) (#13536)
* prevent git operations for inactive users

* Some fixes

* Deny push to repositories whose owner is inactive

* deny operations also when user is ProhibitLogin

Co-authored-by: zeripath <art27@cantab.net>

Co-authored-by: zeripath <art27@cantab.net>
2020-11-13 09:28:32 +08:00
6543
122f8f86d5 Disallow urlencoded new lines in git protocol paths if there is a port (#13521) (#13524)
Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: zeripath <art27@cantab.net>
2020-11-11 23:47:42 +02:00
6543
1f72656892 Migration won't fail on non-migrated reactions (#13507)
* Refactor: dedupe code

* skip Reactions with Invalid ID
2020-11-11 11:01:27 +00:00
techknowlogick
5a32224a2c 1.13.0-rc2 changelog (#13503) 2020-11-10 16:09:05 -05:00
6543
8049de82f9 Prevent panic on git blame by limiting lines to 4096 bytes at most (#13491)
Fix #12440
Closes #13192

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: Andrew Thornton <art27@cantab.net>
2020-11-10 08:00:20 +00:00
6543
797cb38a4a 2nd attempt at re-request APIMergePullRequest (#13468) (#13490)
Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: zeripath <art27@cantab.net>
2020-11-09 21:55:48 -05:00
6543
ae4955999e Fix panic bug in handling multiple references in commit (#13486) (#13487)
* Fix panic bug in handling multiple references in commit (#13486)

The issue lay in determining the position of matches on a second run round
a commit message in FindAllIssueReferences.

Fix #13483

Signed-off-by: Andrew Thornton <art27@cantab.net>

* CI.restart()

Co-authored-by: Andrew Thornton <art27@cantab.net>
2020-11-09 21:16:34 -05:00
techknowlogick
1e446bb176 use registry mirror for docker-in-docker (#13438) (#13445)
Co-authored-by: Lauris BH <lauris@nix.lv>

Co-authored-by: Lauris BH <lauris@nix.lv>
2020-11-06 20:42:56 +00:00
6543
9aa580ce0e Replies to outdated code comments should also be outdated (#13217) (#13433)
* When replying to an outdated comment it should not appear on the files page

This happened because the comment took the latest commitID as its base instead of the
reviewID that it was replying to.

There was also no way of creating an already outdated comment - and a
reply to a review on an outdated line should be outdated.

Signed-off-by: Andrew Thornton <art27@cantab.net>

* fix test

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>

Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2020-11-05 15:14:55 -05:00
mrsdizzie
3421e4b756 Alternative fix for HTML diff entity split (#13425) (#13427)
* Alternative fix for HTML diff entity split

This commit both reverts PR #13357 and uses the existing implementation already used for spans to fix the same issue. That PR duplicated most of the logic that is already present elsewhere and still failed for some cases. This should be simpler, as it uses the existing logic that already works for <span>s being split apart.

Added both test cases as well.

* Update gitdiff_test.go

* fmt

* entities can have uppercase letters; also add a detailed comment per @zeripath
2020-11-05 11:54:03 -05:00
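Splitting highlighted diff lines must not cut through something like `&amp;`; a minimal sketch of a check for whether a byte offset falls inside an HTML entity (illustrative, not the implementation the commit settled on):

```go
package main

import (
	"fmt"
	"strings"
)

// insideEntity reports whether pos falls strictly inside an HTML entity such
// as "&amp;" or "&#39;": there is a '&' before pos with no ';' in between,
// and the entity is terminated by a ';' at or shortly after pos.
func insideEntity(s string, pos int) bool {
	if pos <= 0 || pos >= len(s) {
		return false
	}
	amp := strings.LastIndexByte(s[:pos], '&')
	if amp < 0 || strings.ContainsRune(s[amp:pos], ';') {
		return false
	}
	end := strings.IndexByte(s[pos:], ';')
	return end >= 0 && end < 10 // entities are short; 10 is an arbitrary cap
}

func main() {
	line := "a &lt; b"
	for i := range line {
		if insideEntity(line, i) {
			fmt.Println("would not split at", i)
		}
	}
}
```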
Wim
6086a9061b Add missing full names when DEFAULT_SHOW_FULL_NAME is enabled (#13424) 2020-11-04 09:51:07 -05:00
6543
4ad10ac015 Vendor: mvdan.cc/xurls v2.1.0 -> v2.2.0 (#13407) 2020-11-02 20:56:51 -05:00
Cirno the Strongest
cbdbae2925 Fix 'add code comment' button being invisible all the time (#13389) (#13402)
* Fix 'add code comment' button being invisible all the time

* Fix off-center icon

* Remove old JS hover hack

* Show on full-line hover

Co-authored-by: techknowlogick <techknowlogick@gitea.io>

(cherry picked from commit 7f7e7f3ca4)
2020-11-02 18:09:29 -05:00
Cirno the Strongest
350c10fe5b Fix reactions on code comments (#13390) (#13401)
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>

(cherry picked from commit 06268dcf53)
2020-11-02 22:05:41 +08:00
Lunny Xiao
02259a0f3a Storage configuration support [storage] (#13314) (#13379)
* Fix minio bug

* Add tests for storage configuration

* Change the Seek flag to stay compatible with minio?

* Fix test when first-byte-pos of all ranges is greater than the resource length

Co-authored-by: techknowlogick <techknowlogick@gitea.io>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2020-11-01 23:12:50 +08:00
Lunny Xiao
c3e752ae29 Fix typo (#13380) (#13382) 2020-11-01 15:14:39 +08:00
zeripath
3f94dffca1 When creating line diffs do not split within an html entity (#13357) (#13375)
Backport #13357

* When creating line diffs do not split within an html entity

Fix #13342

Signed-off-by: Andrew Thornton <art27@cantab.net>

* Add test case

Signed-off-by: Andrew Thornton <art27@cantab.net>

* improve test

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2020-10-31 21:30:23 +02:00
silverwind
52b4b984a5 Comment Header fixes (#13356) (#13374)
Apply more flexboxes on comment header and remove float hacks. Needs
1.13 backport.

Fixes: https://github.com/go-gitea/gitea/issues/13316

Co-authored-by: Lauris BH <lauris@nix.lv>

Co-authored-by: Lauris BH <lauris@nix.lv>
2020-10-31 13:25:10 -04:00
zeripath
77a2d75639 Fix scrolling to resolved comment anchors (#13343) (#13371)
* Fix scrolling to resolved comment anchors

As described on discord, when the window.location.hash refers to a
resolved comment then the scroll to functionality does not work.

This PR fixes this.

Signed-off-by: Andrew Thornton <art27@cantab.net>

* Apply suggestions from code review

Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2020-10-31 13:51:51 +02:00
zeripath
79d9cda993 Fix links to repositories in /user/setting/repos (#13360) (#13362)
* Fix links to repositories in /user/setting/repos

Somehow the links gained a spurious $.

Signed-off-by: Andrew Thornton <art27@cantab.net>

* And fix #13359

Signed-off-by: Andrew Thornton <art27@cantab.net>
2020-10-30 17:51:52 +00:00
zeripath
02edb9df52 Migrations should not fail for comment reactions (#13352) (#13355)
An extension to #13444 - we now ensure that comment reaction failures do not cause migration failure

Signed-off-by: Andrew Thornton <art27@cantab.net>
2020-10-29 20:05:15 -04:00
zeripath
f825e2a568 And there is another one ... (#13350)
Signed-off-by: Andrew Thornton <art27@cantab.net>
2020-10-29 20:48:58 +08:00
techknowlogick
8e38bd154f Remove obsolete change of email on profile page (#13341) (#13347)
* Remove obsolete change of email on profile page

The change email on the account profile page is out-of-date
and unnecessary.

Changing email should be done using the account page.

Fix #13336

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: Lauris BH <lauris@nix.lv>
2020-10-29 02:44:45 -04:00
techknowlogick
0b0456310f Migration failure during reaction migration from gitea (#13344) (#13345)
* Migrating reactions is just not that important

A failure during migrating reactions should not cause failure of
migration.

Signed-off-by: Andrew Thornton <art27@cantab.net>

* When checking issue reactions check the correct permission

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>

Co-authored-by: zeripath <art27@cantab.net>
2020-10-28 23:57:15 -04:00
JustAnotherArchivist
639c737648 Add deprecation notice for webhook payload's secret field (#13329) 2020-10-28 23:14:26 -04:00
zeripath
adfe13f1a2 Add migrated pulls to pull request task queue (#13331) (#13334)
* Add migrated pulls to pull request task queue

Fix #13321

Signed-off-by: Andrew Thornton <art27@cantab.net>

* Improve error reports

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2020-10-27 19:44:21 -04:00
M4RKUS-11111
47cb9b3de2 Deny wrong pull (#13308) (#13326)
* Deny wrong pull (#13308)

* Deny wrong pull

* Update routers/api/v1/repo/pull.go

Co-authored-by: zeripath <art27@cantab.net>

Co-authored-by: Markus <git+markus@obermui.de>
Co-authored-by: zeripath <art27@cantab.net>

* CI.restart()

Co-authored-by: Markus <git+markus@obermui.de>
Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>
2020-10-27 16:26:07 -04:00
Paweł Bogusławski
28133a801a Avatar autogeneration fixed (#13282)
This mod fixes a problem with initial avatar autogeneration and
avatar autogeneration after deleting the previous avatar.

Related: https://github.com/go-gitea/gitea/issues/13159
Fixes: 80a6b0f5bc
Author-Change-Id: IB#1105243
2020-10-26 15:56:14 +02:00
zeripath
3d272b899d Ensure topics added using the API are added to the repository (#13285) (#13302)
Partial Backport #13285

Fix #12426

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: Lauris BH <lauris@nix.lv>
2020-10-26 14:14:40 +02:00
zeripath
5178aa2130 Attempt to handle unready PR in tests (#13305) (#13310)
Backport #13305

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Lauris BH <lauris@nix.lv>
2020-10-26 19:13:39 +08:00
zeripath
5da8a84328 Fix Storage mapping (#13297) (#13307)
* Fix Storage mapping (#13297)

Backport #13297

This PR fixes several bugs in setting storage

* The default STORAGE_TYPE should be the provided type.
* The Storage config should be passed in to NewStorage as a pointer - otherwise the Mappable interface function MapTo will not be found
* There was a bug in the MapTo function.

Fix #13286

Signed-off-by: Andrew Thornton <art27@cantab.net>

* add missing changes from backport #13164

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2020-10-25 21:40:46 -04:00
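The pointer point above is a general Go gotcha: when `MapTo` is defined on a pointer receiver, only a *pointer* to the config satisfies the interface, so passing the struct by value makes the type assertion silently fail. A small self-contained illustration (the interface name mirrors the commit message; the rest is invented):

```go
package main

import "fmt"

// Mappable mirrors the interface mentioned in the commit message.
type Mappable interface {
	MapTo(section string) error
}

type MinioConfig struct{ Bucket string }

// MapTo is declared on the pointer receiver, so only *MinioConfig implements Mappable.
func (c *MinioConfig) MapTo(section string) error {
	fmt.Println("mapping section", section, "into", &c.Bucket)
	return nil
}

func configure(cfg interface{}) {
	if m, ok := cfg.(Mappable); ok {
		_ = m.MapTo("storage.minio")
		return
	}
	fmt.Println("Mappable not found; section silently ignored")
}

func main() {
	var c MinioConfig
	configure(c)  // value: interface not satisfied, config never mapped
	configure(&c) // pointer: MapTo is found and runs
}
```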
zeripath
d795bfc964 When the git ref is unable to be found return broken pr (#13218) (#13303)
Backport #13218

Fix #13216

Signed-off-by: Andrew Thornton <art27@cantab.net>
2020-10-25 19:10:09 -04:00
Lunny Xiao
151daf73a6 Fix bug isEnd detection on getIssues/getPullRequests (#13299) (#13301) 2020-10-25 10:13:26 +02:00
techknowlogick
e177728a82 Store task errors following migrations and display them (#13246) (#13287)
* Store task errors following migrations and display them

When migrate tasks fail store the error in the task table
and ensure that they show on the status page.

Fix #13242

Signed-off-by: Andrew Thornton <art27@cantab.net>

* Update web_src/js/index.js

* Hide the failed first

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>

Co-authored-by: zeripath <art27@cantab.net>
2020-10-24 13:02:36 +08:00
John Olheiser
074f7abd95 Remove PAM from auth dropdown when unavailable (#13276) (#13281)
Signed-off-by: jolheiser <john.olheiser@gmail.com>
2020-10-23 12:00:20 -04:00
6543
39412c61bf Migrations: Gitea should not fail just because of no apiConfig return (#13229) (#13273)
* close #13227

* log it

👍

Co-authored-by: zeripath <art27@cantab.net>

Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>

Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2020-10-23 19:11:40 +08:00
silverwind
ad4dde1d49 More arc-green fixes (#13247) (#13253)
- Fix various white borders
- Tweak basic button style to have more contrast
- Add more contrast to hover styles
- Invert Matrix webhook icon

May backport to 1.13.

Co-authored-by: zeripath <art27@cantab.net>

Co-authored-by: zeripath <art27@cantab.net>
2020-10-22 18:55:44 -04:00
zeripath
d51c574350 Fix initial commit page & binary munching problem (#13249) (#13258)
Backport #13249

* Fix initial commit page

Unfortunately, as a result of properly fixing ParsePatch, the hack that
used git show <initial_commit_id> to get the diff for this failed.

This PR fixes this using the "super-secret" empty tree ref to make the
diff against.

Signed-off-by: Andrew Thornton <art27@cantab.net>

* Also fix #13248

Signed-off-by: Andrew Thornton <art27@cantab.net>

* Update services/gitdiff/gitdiff.go

Co-authored-by: 6543 <6543@obermui.de>

Co-authored-by: 6543 <6543@obermui.de>
2020-10-22 13:59:01 +01:00
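The "super-secret" empty tree ref is git's well-known constant object `4b825dc642cb6eb9a060e54bf8d69288fbee4904`, which lets you diff the initial commit against nothing. A minimal sketch with plain git (not Gitea's gitdiff service):

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// emptyTreeSHA is the hash of git's empty tree object; it exists in every
// repository and can be used as the "before" side of a diff.
const emptyTreeSHA = "4b825dc642cb6eb9a060e54bf8d69288fbee4904"

// diffInitialCommit produces the diff of the very first commit by comparing
// it against the empty tree instead of relying on `git show`.
func diffInitialCommit(repoPath, commitID string) (string, error) {
	cmd := exec.Command("git", "diff", emptyTreeSHA, commitID)
	cmd.Dir = repoPath
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	out, err := diffInitialCommit(".", "HEAD")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}
```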
mrsdizzie
52d333f084 Add better error checking for inline html diff code (#13251)
* Fix error in diff html rendering (#13191)

* Fix error in diff html rendering

Was missing an optional whitespace check in regex. Also noticed a rare case where diff.Type == Equal would be empty and thus get a newline attached. Fixed that too.

Fixes #13177

* Update services/gitdiff/gitdiff.go

Co-authored-by: zeripath <art27@cantab.net>

* Update gitdiff_test.go

* fmt

Co-authored-by: zeripath <art27@cantab.net>

* Add better error checking for inline html diff code (#13239)

* Add better error checking for inline html diff code

A better fix for #13191, which cleans up this code a bit and adds basic checking that should avoid writing broken HTML in future situations.

* Update gitdiff_test.go

* better regex

Co-authored-by: zeripath <art27@cantab.net>
2020-10-21 22:37:50 -04:00
zeripath
198e57bc37 Return the full rejection message and errors in flash errors (#13221) (#13237)
* Return the full rejection message and errors in flash errors (#13221)


Signed-off-by: Andrew Thornton <art27@cantab.net>

* Update routers/repo/pull.go

Co-authored-by: John Olheiser <john.olheiser@gmail.com>

Co-authored-by: John Olheiser <john.olheiser@gmail.com>
2020-10-21 14:54:19 -04:00
6543
ba97c0e98b Update heatmap fixtures to restore tests (#13224) (#13225)
`the hotfix day`
2020-10-20 17:39:37 -05:00
techknowlogick
c47f9a0a70 Various arc-green fixes (#13214) (#13215)
- Style search dropdown
- Fix radio buttons and tweak checkboxes
- Add styling for error form elements
- Make borders brighter and focus more apparent
- Adjust comment box border color to match

Fixes: https://github.com/go-gitea/gitea/pull/12491

Co-authored-by: silverwind <me@silverwind.io>
2020-10-20 02:10:05 -04:00
techknowlogick
e97466b840 Fix size and clickable area on file table back link (#13205) (#13207)
Fixes: https://github.com/go-gitea/gitea/issues/13038

Should backport to 1.13.

Co-authored-by: silverwind <me@silverwind.io>
2020-10-19 09:56:17 +03:00
a1012112796
35d0045ce2 Update CHANGELOG.md (#13200) (#13202)
Co-authored-by: zeripath <art27@cantab.net>

Co-authored-by: zeripath <art27@cantab.net>
2020-10-18 13:13:57 -04:00
techknowlogick
aca13f941c When handling errors in storageHandler check underlying error (#13178) (#13193)
Unfortunately there was a mistake in #13164 which failed to handle
an os.PathError wrapping os.ErrNotExist

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: zeripath <art27@cantab.net>
2020-10-18 15:52:03 +01:00
赵智超
1ba4a7ec16 fix a small nit (#13187)
Signed-off-by: a1012112796 <1012112796@qq.com>
2020-10-17 23:38:34 +08:00
zeripath
e9649b39ac Fix diff skipping lines (#13155)
* Fix diff skipping lines

Backport #13154

ParsePatch previously just skipped all lines that start with "+++ " or "--- "
and made no attempt to consider these lines in context.

This PR rewrites ParsePatch to pay attention to context and position
within a patch, ensuring that --- and +++ are only skipped if
appropriate.

This PR also fixes several issues with incomplete files.

Fix https://codeberg.org/Codeberg/Community/issues/308
Fix #13153

Signed-off-by: Andrew Thornton <art27@cantab.net>

* Add testcase

Signed-off-by: Andrew Thornton <art27@cantab.net>

* fix comment

* simplify error handling

Signed-off-by: Andrew Thornton <art27@cantab.net>

* never return io.EOF

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2020-10-16 21:39:35 -04:00
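A hedged sketch of the distinction the fix above draws: "--- "/"+++ " markers are only file headers in the header block that follows a `diff --git` line; anywhere else they are ordinary removed/added content and must not be skipped (simplified, not the real ParsePatch):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// classify walks a patch and labels each line, only treating "--- "/"+++ "
// as headers while we are still in the header block of a file section.
func classify(patch string) {
	inHeader := false
	sc := bufio.NewScanner(strings.NewReader(patch))
	for sc.Scan() {
		line := sc.Text()
		switch {
		case strings.HasPrefix(line, "diff --git "):
			inHeader = true
			fmt.Println("file header:", line)
		case inHeader && (strings.HasPrefix(line, "--- ") || strings.HasPrefix(line, "+++ ")):
			fmt.Println("file header:", line)
		case strings.HasPrefix(line, "@@"):
			inHeader = false
			fmt.Println("hunk header:", line)
		default:
			fmt.Println("content    :", line)
		}
	}
}

func main() {
	classify(`diff --git a/f b/f
--- a/f
+++ b/f
@@ -1,2 +1,2 @@
-old line
+++ this added line starts with plus signs and is content, not a header
`)
}
```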
6543
ea95a9fa15 Update go-version v1.2.3 -> v1.2.4 (#13169) (#13172)
Co-authored-by: zeripath <art27@cantab.net>

Co-authored-by: zeripath <art27@cantab.net>
2020-10-16 12:23:52 -04:00
6543
2ec50b9514 Show outdated comments in pull request (#13148) (#13162)
Co-authored-by: zeripath <art27@cantab.net>

Co-authored-by: Iván Valdés <iv@a.ki>
Co-authored-by: zeripath <art27@cantab.net>
2020-10-15 21:46:56 -04:00
Lauris BH
f587dc69bb Fix Italian language file parsing error (#13156) 2020-10-15 19:57:17 +08:00
Matti R
d655cfe968 align mysql service settings in drone 2020-10-14 16:57:12 -04:00
Lauris BH
89b1b662b3 Add back only missing translation for Latvian language (#13144)
* Add back only missing translation for Latvian language

* Backport German translations
2020-10-14 16:54:56 -04:00
Matti R
cf86abaf3c run mysql container with same conditions as other services 2020-10-14 16:45:38 -04:00
199 changed files with 3483 additions and 1209 deletions

View File

@@ -113,18 +113,6 @@ services:
environment:
MYSQL_ALLOW_EMPTY_PASSWORD: yes
MYSQL_DATABASE: test
GOPROXY: off
TAGS: bindata sqlite sqlite_unlock_notify
GITLAB_READ_TOKEN:
from_secret: gitlab_read_token
depends_on:
- build
when:
branch:
- master
event:
- push
- pull_request
- name: mysql8
pull: default
@@ -678,7 +666,6 @@ steps:
event:
exclude:
- pull_request
---
kind: pipeline
name: docker-linux-arm64-dry-run
@@ -708,6 +695,9 @@ steps:
tags: linux-arm64
build_args:
- GOPROXY=off
environment:
PLUGIN_MIRROR:
from_secret: plugin_mirror
when:
event:
- pull_request
@@ -752,11 +742,13 @@ steps:
from_secret: docker_password
username:
from_secret: docker_username
environment:
PLUGIN_MIRROR:
from_secret: plugin_mirror
when:
event:
exclude:
- pull_request
---
kind: pipeline
name: docker-manifest

View File

@@ -4,14 +4,53 @@ This changelog goes through all the changes that have been made in each release
without substantial changes to our git log; to see the highlights of what has
been added to each release, please refer to the [blog](https://blog.gitea.io).
## [1.13.0-RC1](https://github.com/go-gitea/gitea/releases/tag/v1.13.0-RC1) - 2020-10-14
## [1.13.1](https://github.com/go-gitea/gitea/releases/tag/v1.13.1) - 2020-12-29
* SECURITY
* Hide private participation in Orgs (#13994) (#14031)
* Fix escaping issue in diff (#14153) (#14154)
* BUGFIXES
* Fix bug of link query order on markdown render (#14156) (#14171)
* Drop long repo topics during migration (#14152) (#14155)
* Ensure that search term and page are not lost on adoption page-turn (#14133) (#14143)
* Fix storage config implementation (#14091) (#14095)
* Fix panic in BasicAuthDecode (#14046) (#14048)
* Always wait for the cmd to finish (#14006) (#14039)
* Don't use simpleMDE editor on mobile devices for 1.13 (#14029)
* Fix incorrect review comment diffs (#14002) (#14011)
* Trim the branch prefix from action.GetBranch (#13981) (#13986)
* Ensure template renderer is available before storage handler (#13164) (#13982)
* Whenever the password is updated ensure that the hash algorithm is too (#13966) (#13967)
* Enforce setting HEAD in wiki to master (#13950) (#13961)
* Fix feishu webhook caused by API changed (#13938)
* Fix Quote Reply button on review diff (#13830) (#13898)
* Fix Pull Merge when tag with same name as base branch exist (#13882) (#13896)
* Fix mermaid chart size (#13865)
* Fix branch/tag notifications in mirror sync (#13855) (#13862)
* Fix crash in short link processor (#13839) (#13841)
* Update font stack to bootstrap's latest (#13834) (#13837)
* Make sure email recipients can see issue (#13820) (#13827)
* Reply button is not removed when deleting a code review comment (#13824)
* When reinitialising DBConfig reset the database use flags (#13796) (#13811)
* ENHANCEMENTS
* Add emoji in label to project boards (#13978) (#14021)
* Send webhook when tag is removed via Web UI (#14015) (#14019)
* Use Process Manager to create own Context (#13792) (#13793)
* API
* GetCombinedCommitStatusByRef always return json & swagger doc fixes (#14047)
* Return original URL of Repositories (#13885) (#13886)
## [1.13.0](https://github.com/go-gitea/gitea/releases/tag/v1.13.0) - 2020-12-01
* SECURITY
* Add Allow-/Block-List for Migrate & Mirrors (#13610) (#13776)
* Prevent git operations for inactive users (#13527) (#13536)
* Disallow urlencoded new lines in git protocol paths if there is a port (#13521) (#13524)
* Mitigate Security vulnerability in the git hook feature (#13058)
* Disable DSA ssh keys by default (#13056)
* Set TLS minimum version to 1.2 (#12689)
* Use argon as default password hash algorithm (#12688)
* BREAKING
* Set RUN_MODE prod by default (#13765) (#13767)
* Don't replace underscores in auto-generated IDs in goldmark (#12805)
* Add Primary Key to Topic and RepoTopic tables (#12639)
* Disable password complexity check default (#12557)
@@ -71,6 +110,40 @@ been added to each release, please refer to the [blog](https://blog.gitea.io).
* Add endpoint for Branch Creation (#11607)
* Add pagination headers on endpoints that support total count from database (#11145)
* BUGFIXES
* Fix bogus http requests on diffs (#13760) (#13761)
* Show 'owner' tag for real owner (#13689) (#13743)
* Validate email before inserting/updating (#13475) (#13666)
* Fix issue/pull request list assignee filter (#13647) (#13651)
* Gitlab migration support for subdirectories (#13563) (#13591)
* Fix logic for preferred license setting (#13550) (#13557)
* Add missed sync branch/tag webhook (#13538) (#13556)
* Migration won't fail on non-migrated reactions (#13507)
* Fix Italian language file parsing error (#13156)
* Show outdated comments in pull request (#13148) (#13162)
* Fix parsing of pre-release git version (#13169) (#13172)
* Fix diff skipping lines (#13154) (#13155)
* When handling errors in storageHandler check underlying error (#13178) (#13193)
* Fix size and clickable area on file table back link (#13205) (#13207)
* Add better error checking for inline html diff code (#13251)
* Fix initial commit page & binary munching problem (#13249) (#13258)
* Fix migrations from remote Gitea instances when configuration not set (#13229) (#13273)
* Store task errors following migrations and display them (#13246) (#13287)
* Fix bug isEnd detection on getIssues/getPullRequests (#13299) (#13301)
* When the git ref is unable to be found return broken pr (#13218) (#13303)
* Ensure topics added using the API are added to the repository (#13285) (#13302)
* Fix avatar autogeneration (#13233) (#13282)
* Add migrated pulls to pull request task queue (#13331) (#13334)
* Issue comment reactions should also check pull type on API (#13349) (#13350)
* Fix links to repositories in /user/setting/repos (#13360) (#13362)
* Remove obsolete change of email on profile page (#13341) (#13347)
* Fix scrolling to resolved comment anchors (#13343) (#13371)
* Storage configuration support `[storage]` (#13314) (#13379)
* When creating line diffs do not split within an html entity (#13357) (#13375) (#13425) (#13427)
* Fix reactions on code comments (#13390) (#13401)
* Add missing full names when DEFAULT_SHOW_FULL_NAME is enabled (#13424)
* Replies to outdated code comments should also be outdated (#13217) (#13433)
* Fix panic bug in handling multiple references in commit (#13486) (#13487)
* Prevent panic on git blame by limiting lines to 4096 bytes at most (#13470) (#13491)
* Show original author's reviews on pull summary box (#13127)
* Update golangci-lint to version 1.31.0 (#13102)
* Fix line break for MS teams webhook (#13081)
@@ -140,6 +213,10 @@ been added to each release, please refer to the [blog](https://blog.gitea.io).
* Fix Enter not working in SimpleMDE (#11564)
* Fix bug about can't skip commits base on base branch (#11555)
* ENHANCEMENTS
* Only Return JSON for responses (#13511) (#13565)
* Use existing analyzer module for language detection for highlighting (#13522) (#13551)
* Return the full rejection message and errors in flash errors (#13221) (#13237)
* Remove PAM from auth dropdown when unavailable (#13276) (#13281)
* Add HostCertificate to sshd_config in Docker image (#13143)
* Save TimeStamps for Star, Label, Follow, Watch and Collaboration to Database (#13124)
* Improve error feedback for duplicate deploy keys (#13112)

View File

@@ -638,8 +638,8 @@ fomantic: $(FOMANTIC_DEST)
$(FOMANTIC_DEST): $(FOMANTIC_CONFIGS) | node_modules
rm -rf $(FOMANTIC_DEST_DIR)
cp web_src/fomantic/theme.config.less node_modules/fomantic-ui/src/theme.config
cp -r web_src/fomantic/_site/* node_modules/fomantic-ui/src/_site/
cp -f web_src/fomantic/theme.config.less node_modules/fomantic-ui/src/theme.config
cp -fr web_src/fomantic/_site/* node_modules/fomantic-ui/src/_site/
npx gulp -f node_modules/fomantic-ui/gulpfile.js build
@touch $(FOMANTIC_DEST)

View File

@@ -283,7 +283,7 @@ func runChangePassword(c *cli.Context) error {
}
user.HashPassword(c.String("password"))
if err := models.UpdateUserCols(user, "passwd", "salt"); err != nil {
if err := models.UpdateUserCols(user, "passwd", "passwd_hash_algo", "salt"); err != nil {
return err
}

View File

@@ -8,8 +8,8 @@
APP_NAME = Gitea: Git with a cup of tea
; Change it if you run locally
RUN_USER = git
; Either "dev", "prod" or "test", default is "dev"
RUN_MODE = dev
; Application run mode, affects performance and debugging. Either "dev", "prod" or "test", default is "prod"
RUN_MODE = prod
[project]
; Default templates for project boards
@@ -1188,6 +1188,14 @@ QUEUE_CONN_STR = "addrs=127.0.0.1:6379 db=0"
MAX_ATTEMPTS = 3
; Backoff time per http/https request retry (seconds)
RETRY_BACKOFF = 3
; Allowed domains for migrating, default is blank. Blank means everything will be allowed.
; Multiple domains could be separated by commas.
ALLOWED_DOMAINS =
; Blocklist for migrating, default is blank. Multiple domains could be separated by commas.
; When ALLOWED_DOMAINS is not blank, this option will be ignored.
BLOCKED_DOMAINS =
; Allow private addresses defined by RFC 1918, RFC 1122, RFC 4632 and RFC 4291 (false by default)
ALLOW_LOCALNETWORKS = false
; default storage for attachments, lfs and avatars
[storage]

View File

@@ -25,7 +25,7 @@ if [ ! -f ${GITEA_CUSTOM}/conf/app.ini ]; then
# Substitude the environment variables in the template
APP_NAME=${APP_NAME:-"Gitea: Git with a cup of tea"} \
RUN_MODE=${RUN_MODE:-"dev"} \
RUN_MODE=${RUN_MODE:-"prod"} \
DOMAIN=${DOMAIN:-"localhost"} \
SSH_DOMAIN=${SSH_DOMAIN:-"localhost"} \
HTTP_PORT=${HTTP_PORT:-"3000"} \

View File

@@ -36,9 +36,7 @@ Values containing `#` or `;` must be quoted using `` ` `` or `"""`.
- `APP_NAME`: **Gitea: Git with a cup of tea**: Application name, used in the page title.
- `RUN_USER`: **git**: The user Gitea will run as. This should be a dedicated system
(non-user) account. Setting this incorrectly will cause Gitea to not start.
- `RUN_MODE`: **dev**: For performance and other purposes, change this to `prod` when
deployed to a production environment. The installation process will set this to `prod`
automatically. \[prod, dev, test\]
- `RUN_MODE`: **prod**: Application run mode, affects performance and debugging. Either "dev", "prod" or "test".
## Repository (`repository`)
@@ -813,6 +811,9 @@ Task queue configuration has been moved to `queue.task`. However, the below conf
- `MAX_ATTEMPTS`: **3**: Max attempts per http/https request on migrations.
- `RETRY_BACKOFF`: **3**: Backoff time per http/https request retry (seconds)
- `ALLOWED_DOMAINS`: **\<empty\>**: Domains allowlist for migrating repositories, default is blank. It means everything will be allowed. Multiple domains could be separated by commas.
- `BLOCKED_DOMAINS`: **\<empty\>**: Domains blocklist for migrating repositories, default is blank. Multiple domains could be separated by commas. When `ALLOWED_DOMAINS` is not blank, this option will be ignored.
- `ALLOW_LOCALNETWORKS`: **false**: Allow private addresses defined by RFC 1918, RFC 1122, RFC 4632 and RFC 4291
## Mirror (`mirror`)

View File

@@ -313,6 +313,9 @@ IS_INPUT_FILE = false
- `MAX_ATTEMPTS`: **3**: 在迁移过程中的 http/https 请求重试次数。
- `RETRY_BACKOFF`: **3**: 等待下一次重试的时间,单位秒。
- `ALLOWED_DOMAINS`: **\<empty\>**: 迁移仓库的域名白名单,默认为空,表示允许从任意域名迁移仓库,多个域名用逗号分隔。
- `BLOCKED_DOMAINS`: **\<empty\>**: 迁移仓库的域名黑名单,默认为空,多个域名用逗号分隔。如果 `ALLOWED_DOMAINS` 不为空,此选项将会被忽略。
- `ALLOW_LOCALNETWORKS`: **false**: Allow private addresses defined by RFC 1918
## LFS (`lfs`)

View File

@@ -30,6 +30,8 @@ All event pushes are POST requests. The methods currently supported are:
### Event information
**WARNING**: The `secret` field in the payload is deprecated as of Gitea 1.13.0 and will be removed in 1.14.0: https://github.com/go-gitea/gitea/issues/11755
The following is an example of event information that will be sent by Gitea to
a Payload URL:

View File

@@ -257,7 +257,7 @@ You can configure some of Gitea's settings via environment variables:
(Default values are provided in **bold**)
* `APP_NAME`: **"Gitea: Git with a cup of tea"**: Application name, used in the page title.
* `RUN_MODE`: **dev**: For performance and other purposes, change this to `prod` when deployed to a production environment.
* `RUN_MODE`: **prod**: Application run mode, affects performance and debugging. Either "dev", "prod" or "test".
* `DOMAIN`: **localhost**: Domain name of this server, used for the displayed http clone URL in Gitea's UI.
* `SSH_DOMAIN`: **localhost**: Domain name of this server, used for the displayed ssh clone URL in Gitea's UI. If the install page is enabled, SSH Domain Server takes DOMAIN value in the form (which overwrite this setting on save).
* `SSH_PORT`: **22**: SSH port displayed in clone URL.

go.mod (8 changed lines)
View File

@@ -104,7 +104,7 @@ require (
github.com/yuin/goldmark-meta v0.0.0-20191126180153-f0638e958b60
go.jolheiser.com/hcaptcha v0.0.4
go.jolheiser.com/pwn v0.0.3
golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a
golang.org/x/crypto v0.0.0-20201217014255-9d1352758620
golang.org/x/net v0.0.0-20200904194848-62affa334b73
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d
golang.org/x/sys v0.0.0-20200918174421-af09f7315aff
@@ -117,10 +117,12 @@ require (
gopkg.in/ini.v1 v1.61.0
gopkg.in/ldap.v3 v3.0.2
gopkg.in/yaml.v2 v2.3.0
mvdan.cc/xurls/v2 v2.1.0
mvdan.cc/xurls/v2 v2.2.0
strk.kbt.io/projects/go/libravatar v0.0.0-20191008002943-06d1c002b251
xorm.io/builder v0.3.7
xorm.io/xorm v1.0.5
)
replace github.com/hashicorp/go-version => github.com/6543/go-version v1.2.3
replace github.com/hashicorp/go-version => github.com/6543/go-version v1.2.4
replace github.com/microcosm-cc/bluemonday => github.com/lunny/bluemonday v1.0.5-0.20201227154428-ca34796141e8

go.sum (18 changed lines)
View File

@@ -48,8 +48,8 @@ gitea.com/macaron/toolbox v0.0.0-20190822013122-05ff0fc766b7 h1:N9QFoeNsUXLhl14m
gitea.com/macaron/toolbox v0.0.0-20190822013122-05ff0fc766b7/go.mod h1:kgsbFPPS4P+acDYDOPDa3N4IWWOuDJt5/INKRUz7aks=
gitea.com/xorm/sqlfiddle v0.0.0-20180821085327-62ce714f951a h1:lSA0F4e9A2NcQSqGqTOXqu2aRi/XEQxDCBwM8yJtE6s=
gitea.com/xorm/sqlfiddle v0.0.0-20180821085327-62ce714f951a/go.mod h1:EXuID2Zs0pAQhH8yz+DNjUbjppKQzKFAn28TMYPB6IU=
github.com/6543/go-version v1.2.3 h1:uF30BawMhoQLzqBeCwhFcWM6HVxlzMHe/zXbzJeKP+o=
github.com/6543/go-version v1.2.3/go.mod h1:fcfWh4zkneEgGXe8JJptiGwp8l6JgJJgS7oTw6P83So=
github.com/6543/go-version v1.2.4 h1:MPsSnqNrM0HwA9tnmWNnsMdQMg4/u4fflARjwomoof4=
github.com/6543/go-version v1.2.4/go.mod h1:oqFAHCwtLVUTLdhQmVZWYvaHXTdsbB4SY85at64SQEo=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
@@ -598,6 +598,8 @@ github.com/lib/pq v1.3.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.7.0/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lib/pq v1.8.1-0.20200908161135-083382b7e6fc h1:ERSU1OvZ6MdWhHieo2oT7xwR/HCksqKdgK6iYPU5pHI=
github.com/lib/pq v1.8.1-0.20200908161135-083382b7e6fc/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lunny/bluemonday v1.0.5-0.20201227154428-ca34796141e8 h1:1omo92DLtxQu6VwVPSZAmduHaK5zssed6cvkHyl1XOg=
github.com/lunny/bluemonday v1.0.5-0.20201227154428-ca34796141e8/go.mod h1:8iwZnFn2CDDNZ0r6UXhF4xawGvzaqzCRa1n3/lO3W2w=
github.com/lunny/dingtalk_webhook v0.0.0-20171025031554-e3534c89ef96 h1:uNwtsDp7ci48vBTTxDuwcoTXz4lwtDTe7TjCQ0noaWY=
github.com/lunny/dingtalk_webhook v0.0.0-20171025031554-e3534c89ef96/go.mod h1:mmIfjCSQlGYXmJ95jFN84AkQFnVABtKuJL8IrzwvUKQ=
github.com/lunny/log v0.0.0-20160921050905-7887c61bf0de h1:nyxwRdWHAVxpFcDThedEgQ07DbcRc5xgNObtbTp76fk=
@@ -649,8 +651,6 @@ github.com/mgechev/revive v1.0.3-0.20200921231451-246eac737dc7 h1:ydVkpU/M4/c45y
github.com/mgechev/revive v1.0.3-0.20200921231451-246eac737dc7/go.mod h1:no/hfevHbndpXR5CaJahkYCfM/FFpmM/dSOwFGU7Z1o=
github.com/mholt/archiver/v3 v3.3.0 h1:vWjhY8SQp5yzM9P6OJ/eZEkmi3UAbRrxCq48MxjAzig=
github.com/mholt/archiver/v3 v3.3.0/go.mod h1:YnQtqsp+94Rwd0D/rk5cnLrxusUBUXg+08Ebtr1Mqao=
github.com/microcosm-cc/bluemonday v1.0.3-0.20191119130333-0a75d7616912 h1:hJde9rA24hlTcAYSwJoXpDUyGtfKQ/jsofw+WaDqGrI=
github.com/microcosm-cc/bluemonday v1.0.3-0.20191119130333-0a75d7616912/go.mod h1:8iwZnFn2CDDNZ0r6UXhF4xawGvzaqzCRa1n3/lO3W2w=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/minio/md5-simd v1.1.0 h1:QPfiOqlZH+Cj9teu0t9b1nTBfPbyTl16Of5MeuShdK4=
github.com/minio/md5-simd v1.1.0/go.mod h1:XpBqgZULrMYD3R+M28PcmP0CkI7PEMzB3U77ZrKZ0Gw=
@@ -768,6 +768,7 @@ github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6So
github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.2.2/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.5.2/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
github.com/rs/xid v1.2.1 h1:mhH9Nq+C1fY2l1XIpgxIiUOfNpRBYH1kKcr+qfKgjRc=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/rs/zerolog v1.13.0/go.mod h1:YbFCdg8HfsridGWAh22vktObvhZbQsZXe4/zB0OKkWU=
@@ -936,8 +937,9 @@ golang.org/x/crypto v0.0.0-20200302210943-78000ba7a073/go.mod h1:LzIPMQfyMNhhGPh
golang.org/x/crypto v0.0.0-20200323165209-0ec3e9974c59/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200709230013-948cd5f35899/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a h1:vclmkQCjlDX5OydZ9wv8rBCcS0QyQY66Mpf/7BZbInM=
golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201217014255-9d1352758620 h1:3wPMTskHO3+O6jqTEXyFcsnuxMQOqYSaHsDxcbUXpqA=
golang.org/x/crypto v0.0.0-20201217014255-9d1352758620/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -1052,6 +1054,8 @@ golang.org/x/sys v0.0.0-20200413165638-669c56c373c4/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200918174421-af09f7315aff h1:1CPUrky56AcgSpxz/KfgzQWzfG09u5YOL8MvPYBlrL8=
golang.org/x/sys v0.0.0-20200918174421-af09f7315aff/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221 h1:/ZHdbVpdR/jk3g30/d4yUL0JU9kksj8+F/bnQUVLGDM=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
@@ -1196,8 +1200,8 @@ honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWh
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
mvdan.cc/xurls/v2 v2.1.0 h1:KaMb5GLhlcSX+e+qhbRJODnUUBvlw01jt4yrjFIHAuA=
mvdan.cc/xurls/v2 v2.1.0/go.mod h1:5GrSd9rOnKOpZaji1OZLYL/yeAAtGDlo/cFe+8K5n8E=
mvdan.cc/xurls/v2 v2.2.0 h1:NSZPykBXJFCetGZykLAxaL6SIpvbVy/UFEniIfHAa8A=
mvdan.cc/xurls/v2 v2.2.0/go.mod h1:EV1RMtya9D6G5DMYPGD8zTQzaHet6Jh8gFlRgGRJeO8=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
strk.kbt.io/projects/go/libravatar v0.0.0-20191008002943-06d1c002b251 h1:mUcz5b3FJbP5Cvdq7Khzn6J9OCUQJaBwgBkCR+MOwSs=
strk.kbt.io/projects/go/libravatar v0.0.0-20191008002943-06d1c002b251/go.mod h1:FJGmPh3vz9jSos1L/F91iAgnC/aejc0wIIrF2ZwJxdY=

View File

@@ -144,3 +144,22 @@ func TestAPIListUsersNonAdmin(t *testing.T) {
req := NewRequestf(t, "GET", "/api/v1/admin/users?token=%s", token)
session.MakeRequest(t, req, http.StatusForbidden)
}
func TestAPICreateUserInvalidEmail(t *testing.T) {
defer prepareTestEnv(t)()
adminUsername := "user1"
session := loginUser(t, adminUsername)
token := getTokenForLoggedInUser(t, session)
urlStr := fmt.Sprintf("/api/v1/admin/users?token=%s", token)
req := NewRequestWithValues(t, "POST", urlStr, map[string]string{
"email": "invalid_email@domain.com\r\n",
"full_name": "invalid user",
"login_name": "invalidUser",
"must_change_password": "true",
"password": "password",
"send_notify": "true",
"source_id": "0",
"username": "invalidUser",
})
session.MakeRequest(t, req, http.StatusUnprocessableEntity)
}

View File

@@ -5,14 +5,17 @@
package integrations
import (
"context"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"testing"
"time"
"code.gitea.io/gitea/models"
"code.gitea.io/gitea/modules/auth"
"code.gitea.io/gitea/modules/queue"
api "code.gitea.io/gitea/modules/structs"
"github.com/stretchr/testify/assert"
@@ -225,11 +228,29 @@ func doAPIMergePullRequest(ctx APITestContext, owner, repo string, index int64)
Do: string(models.MergeStyleMerge),
})
if ctx.ExpectedCode != 0 {
ctx.Session.MakeRequest(t, req, ctx.ExpectedCode)
return
resp := ctx.Session.MakeRequest(t, req, NoExpectedStatus)
if resp.Code == http.StatusMethodNotAllowed {
err := api.APIError{}
DecodeJSON(t, resp, &err)
assert.EqualValues(t, "Please try again later", err.Message)
queue.GetManager().FlushAll(context.Background(), 5*time.Second)
req = NewRequestWithJSON(t, http.MethodPost, urlStr, &auth.MergePullRequestForm{
MergeMessageField: "doAPIMergePullRequest Merge",
Do: string(models.MergeStyleMerge),
})
resp = ctx.Session.MakeRequest(t, req, NoExpectedStatus)
}
expected := ctx.ExpectedCode
if expected == 0 {
expected = 200
}
if !assert.EqualValues(t, expected, resp.Code,
"Request: %s %s", req.Method, req.URL.String()) {
logUnexpectedResponse(t, resp)
}
ctx.Session.MakeRequest(t, req, 200)
}
}

View File

@@ -309,6 +309,8 @@ func TestAPIRepoMigrate(t *testing.T) {
{ctxUserID: 2, userID: 1, cloneURL: "https://github.com/go-gitea/test_repo.git", repoName: "git-bad", expectedStatus: http.StatusForbidden},
{ctxUserID: 2, userID: 3, cloneURL: "https://github.com/go-gitea/test_repo.git", repoName: "git-org", expectedStatus: http.StatusCreated},
{ctxUserID: 2, userID: 6, cloneURL: "https://github.com/go-gitea/test_repo.git", repoName: "git-bad-org", expectedStatus: http.StatusForbidden},
{ctxUserID: 2, userID: 3, cloneURL: "https://localhost:3000/user/test_repo.git", repoName: "local-ip", expectedStatus: http.StatusUnprocessableEntity},
{ctxUserID: 2, userID: 3, cloneURL: "https://10.0.0.1/user/test_repo.git", repoName: "private-ip", expectedStatus: http.StatusUnprocessableEntity},
}
defer prepareTestEnv(t)()
@@ -325,8 +327,16 @@ func TestAPIRepoMigrate(t *testing.T) {
if resp.Code == http.StatusUnprocessableEntity {
respJSON := map[string]string{}
DecodeJSON(t, resp, &respJSON)
if assert.Equal(t, respJSON["message"], "Remote visit addressed rate limitation.") {
switch respJSON["message"] {
case "Remote visit addressed rate limitation.":
t.Log("test hit github rate limitation")
case "migrate from '10.0.0.1' is not allowed: the host resolve to a private ip address '10.0.0.1'":
assert.EqualValues(t, "private-ip", testCase.repoName)
case "migrate from 'localhost:3000' is not allowed: the host resolve to a private ip address '::1'",
"migrate from 'localhost:3000' is not allowed: the host resolve to a private ip address '127.0.0.1'":
assert.EqualValues(t, "local-ip", testCase.repoName)
default:
t.Errorf("unexpected error '%v' on url '%s'", respJSON["message"], testCase.cloneURL)
}
} else {
assert.EqualValues(t, testCase.expectedStatus, resp.Code)

View File

@@ -26,7 +26,7 @@ func TestUserHeatmap(t *testing.T) {
var heatmap []*models.UserHeatmapData
DecodeJSON(t, resp, &heatmap)
var dummyheatmap []*models.UserHeatmapData
dummyheatmap = append(dummyheatmap, &models.UserHeatmapData{Timestamp: 1571616000, Contributions: 1})
dummyheatmap = append(dummyheatmap, &models.UserHeatmapData{Timestamp: 1603152000, Contributions: 1})
assert.Equal(t, dummyheatmap, heatmap)
}

View File

@@ -141,7 +141,7 @@ func TestLDAPUserSignin(t *testing.T) {
assert.Equal(t, u.UserName, htmlDoc.GetInputValueByName("name"))
assert.Equal(t, u.FullName, htmlDoc.GetInputValueByName("full_name"))
assert.Equal(t, u.Email, htmlDoc.GetInputValueByName("email"))
assert.Equal(t, u.Email, htmlDoc.Find(`label[for="email"]`).Siblings().First().Text())
}
func TestLDAPUserSync(t *testing.T) {

View File

@@ -111,7 +111,7 @@ func onGiteaRun(t *testing.T, callback func(*testing.T, *url.URL), prepare ...bo
func doGitClone(dstLocalPath string, u *url.URL) func(*testing.T) {
return func(t *testing.T) {
assert.NoError(t, git.CloneWithArgs(u.String(), dstLocalPath, allowLFSFilters(), git.CloneRepoOptions{}))
assert.NoError(t, git.CloneWithArgs(context.Background(), u.String(), dstLocalPath, allowLFSFilters(), git.CloneRepoOptions{}))
assert.True(t, com.IsExist(filepath.Join(dstLocalPath, "README.md")))
}
}

View File

@@ -37,6 +37,13 @@ func (doc *HTMLDoc) GetInputValueByName(name string) string {
return text
}
// Find gets the descendants of each element in the current set of
// matched elements, filtered by a selector. It returns a new Selection
// object containing these matched elements.
func (doc *HTMLDoc) Find(selector string) *goquery.Selection {
return doc.doc.Find(selector)
}
// GetCSRF returns the CSRF token value from the form input
func (doc *HTMLDoc) GetCSRF() string {
return doc.GetInputValueByName("_csrf")

View File

@@ -11,7 +11,6 @@ import (
"encoding/json"
"fmt"
"io"
"log"
"net/http"
"net/http/cookiejar"
"net/http/httptest"
@@ -27,8 +26,10 @@ import (
"code.gitea.io/gitea/models"
"code.gitea.io/gitea/modules/base"
"code.gitea.io/gitea/modules/graceful"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/queue"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/storage"
"code.gitea.io/gitea/modules/util"
"code.gitea.io/gitea/routers"
"code.gitea.io/gitea/routers/routes"
@@ -59,6 +60,8 @@ func NewNilResponseRecorder() *NilResponseRecorder {
}
func TestMain(m *testing.M) {
defer log.Close()
managerCtx, cancel := context.WithCancel(context.Background())
graceful.InitManager(managerCtx)
defer cancel()
@@ -142,6 +145,10 @@ func initIntegrationTest() {
util.RemoveAll(models.LocalCopyPath())
setting.CheckLFSVersion()
setting.InitDBConfig()
if err := storage.Init(); err != nil {
fmt.Printf("Init storage failed: %v", err)
os.Exit(1)
}
switch {
case setting.Database.UseMySQL:
@@ -149,27 +156,27 @@ func initIntegrationTest() {
setting.Database.User, setting.Database.Passwd, setting.Database.Host))
defer db.Close()
if err != nil {
log.Fatalf("sql.Open: %v", err)
log.Fatal("sql.Open: %v", err)
}
if _, err = db.Exec(fmt.Sprintf("CREATE DATABASE IF NOT EXISTS %s", setting.Database.Name)); err != nil {
log.Fatalf("db.Exec: %v", err)
log.Fatal("db.Exec: %v", err)
}
case setting.Database.UsePostgreSQL:
db, err := sql.Open("postgres", fmt.Sprintf("postgres://%s:%s@%s/?sslmode=%s",
setting.Database.User, setting.Database.Passwd, setting.Database.Host, setting.Database.SSLMode))
defer db.Close()
if err != nil {
log.Fatalf("sql.Open: %v", err)
log.Fatal("sql.Open: %v", err)
}
dbrows, err := db.Query(fmt.Sprintf("SELECT 1 FROM pg_database WHERE datname = '%s'", setting.Database.Name))
if err != nil {
log.Fatalf("db.Query: %v", err)
log.Fatal("db.Query: %v", err)
}
defer dbrows.Close()
if !dbrows.Next() {
if _, err = db.Exec(fmt.Sprintf("CREATE DATABASE %s", setting.Database.Name)); err != nil {
log.Fatalf("db.Exec: CREATE DATABASE: %v", err)
log.Fatal("db.Exec: CREATE DATABASE: %v", err)
}
}
// Check if we need to setup a specific schema
@@ -183,18 +190,18 @@ func initIntegrationTest() {
// This is a different db object; requires a different Close()
defer db.Close()
if err != nil {
log.Fatalf("sql.Open: %v", err)
log.Fatal("sql.Open: %v", err)
}
schrows, err := db.Query(fmt.Sprintf("SELECT 1 FROM information_schema.schemata WHERE schema_name = '%s'", setting.Database.Schema))
if err != nil {
log.Fatalf("db.Query: %v", err)
log.Fatal("db.Query: %v", err)
}
defer schrows.Close()
if !schrows.Next() {
// Create and setup a DB schema
if _, err = db.Exec(fmt.Sprintf("CREATE SCHEMA %s", setting.Database.Schema)); err != nil {
log.Fatalf("db.Exec: CREATE SCHEMA: %v", err)
log.Fatal("db.Exec: CREATE SCHEMA: %v", err)
}
}
@@ -203,10 +210,10 @@ func initIntegrationTest() {
db, err := sql.Open("mssql", fmt.Sprintf("server=%s; port=%s; database=%s; user id=%s; password=%s;",
host, port, "master", setting.Database.User, setting.Database.Passwd))
if err != nil {
log.Fatalf("sql.Open: %v", err)
log.Fatal("sql.Open: %v", err)
}
if _, err := db.Exec(fmt.Sprintf("If(db_id(N'%s') IS NULL) BEGIN CREATE DATABASE %s; END;", setting.Database.Name, setting.Database.Name)); err != nil {
log.Fatalf("db.Exec: %v", err)
log.Fatal("db.Exec: %v", err)
}
defer db.Close()
}

View File

@@ -78,6 +78,7 @@ func storeAndGetLfs(t *testing.T, content *[]byte, extraHeader *http.Header, exp
}
}
}
resp := session.MakeRequest(t, req, expectedStatus)
return resp
@@ -210,7 +211,7 @@ func TestGetLFSRange(t *testing.T) {
{"bytes=0-10", "123456789\n", http.StatusPartialContent},
// end-range bigger than length-1 is ignored
{"bytes=0-11", "123456789\n", http.StatusPartialContent},
{"bytes=11-", "", http.StatusPartialContent},
{"bytes=11-", "Requested Range Not Satisfiable", http.StatusRequestedRangeNotSatisfiable},
// incorrect header value cause whole header to be ignored
{"bytes=-", "123456789\n", http.StatusOK},
{"foobar", "123456789\n", http.StatusOK},

View File

@@ -45,19 +45,21 @@ START_SSH_SERVER = true
OFFLINE_MODE = false
LFS_START_SERVER = true
LFS_CONTENT_PATH = integrations/gitea-integration-mysql/datalfs-mysql
LFS_JWT_SECRET = Tv_MjmZuHqpIY6GFl12ebgkRAMt4RlWt0v4EHKSXO0w
LFS_STORE_TYPE = minio
LFS_SERVE_DIRECT = false
LFS_MINIO_ENDPOINT = minio:9000
LFS_MINIO_ACCESS_KEY_ID = 123456
LFS_MINIO_SECRET_ACCESS_KEY = 12345678
LFS_MINIO_BUCKET = gitea
LFS_MINIO_LOCATION = us-east-1
LFS_MINIO_BASE_PATH = lfs/
LFS_MINIO_USE_SSL = false
[lfs]
MINIO_BASE_PATH = lfs/
[attachment]
MINIO_BASE_PATH = attachments/
[avatars]
MINIO_BASE_PATH = avatars/
[repo-avatars]
MINIO_BASE_PATH = repo-avatars/
[storage]
STORAGE_TYPE = minio
SERVE_DIRECT = false
MINIO_ENDPOINT = minio:9000
@@ -65,7 +67,6 @@ MINIO_ACCESS_KEY_ID = 123456
MINIO_SECRET_ACCESS_KEY = 12345678
MINIO_BUCKET = gitea
MINIO_LOCATION = us-east-1
MINIO_BASE_PATH = attachments/
MINIO_USE_SSL = false
[mailer]
@@ -88,9 +89,6 @@ ENABLE_NOTIFY_MAIL = true
DISABLE_GRAVATAR = false
ENABLE_FEDERATED_AVATAR = false
AVATAR_UPLOAD_PATH = integrations/gitea-integration-mysql/data/avatars
REPOSITORY_AVATAR_UPLOAD_PATH = integrations/gitea-integration-mysql/data/repo-avatars
[session]
PROVIDER = file
PROVIDER_CONFIG = integrations/gitea-integration-mysql/data/sessions

View File

@@ -5,10 +5,14 @@
package integrations
import (
"fmt"
"net/http"
"strings"
"testing"
"code.gitea.io/gitea/modules/setting"
"github.com/stretchr/testify/assert"
"github.com/unknwon/i18n"
)
func TestSignup(t *testing.T) {
@@ -28,3 +32,37 @@ func TestSignup(t *testing.T) {
req = NewRequest(t, "GET", "/exampleUser")
MakeRequest(t, req, http.StatusOK)
}
func TestSignupEmail(t *testing.T) {
defer prepareTestEnv(t)()
setting.Service.EnableCaptcha = false
tests := []struct {
email string
wantStatus int
wantMsg string
}{
{"exampleUser@example.com\r\n", http.StatusOK, i18n.Tr("en", "form.email_invalid", nil)},
{"exampleUser@example.com\r", http.StatusOK, i18n.Tr("en", "form.email_invalid", nil)},
{"exampleUser@example.com\n", http.StatusOK, i18n.Tr("en", "form.email_invalid", nil)},
{"exampleUser@example.com", http.StatusFound, ""},
}
for i, test := range tests {
req := NewRequestWithValues(t, "POST", "/user/sign_up", map[string]string{
"user_name": fmt.Sprintf("exampleUser%d", i),
"email": test.email,
"password": "examplePassword!1",
"retype": "examplePassword!1",
})
resp := MakeRequest(t, req, test.wantStatus)
if test.wantMsg != "" {
htmlDoc := NewHTMLParser(t, resp.Body)
assert.Equal(t,
test.wantMsg,
strings.TrimSpace(htmlDoc.doc.Find(".ui.message").Text()),
)
}
}
}

View File

@@ -13,6 +13,7 @@ import (
"time"
"code.gitea.io/gitea/modules/base"
"code.gitea.io/gitea/modules/git"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/timeutil"
@@ -243,7 +244,7 @@ func (a *Action) getCommentLink(e Engine) string {
// GetBranch returns the action's repository branch.
func (a *Action) GetBranch() string {
return a.RefName
return strings.TrimPrefix(a.RefName, git.BranchPrefix)
}
// GetContent returns the action's content.

View File

@@ -193,6 +193,21 @@ func (err ErrEmailAlreadyUsed) Error() string {
return fmt.Sprintf("e-mail already in use [email: %s]", err.Email)
}
// ErrEmailInvalid represents an error where the email address does not comply with RFC 5322
type ErrEmailInvalid struct {
Email string
}
// IsErrEmailInvalid checks if an error is an ErrEmailInvalid
func IsErrEmailInvalid(err error) bool {
_, ok := err.(ErrEmailInvalid)
return ok
}
func (err ErrEmailInvalid) Error() string {
return fmt.Sprintf("e-mail invalid [email: %s]", err.Email)
}
// ErrOpenIDAlreadyUsed represents a "OpenIDAlreadyUsed" kind of error.
type ErrOpenIDAlreadyUsed struct {
OpenID string
@@ -1004,6 +1019,29 @@ func IsErrWontSign(err error) bool {
return ok
}
// ErrMigrationNotAllowed explains why a migration from a URL is not allowed
type ErrMigrationNotAllowed struct {
Host string
NotResolvedIP bool
PrivateNet string
}
func (e *ErrMigrationNotAllowed) Error() string {
if e.NotResolvedIP {
return fmt.Sprintf("migrate from '%s' is not allowed: unknown hostname", e.Host)
}
if len(e.PrivateNet) != 0 {
return fmt.Sprintf("migrate from '%s' is not allowed: the host resolve to a private ip address '%s'", e.Host, e.PrivateNet)
}
return fmt.Sprintf("migrate from '%s' is not allowed", e.Host)
}
// IsErrMigrationNotAllowed checks if an error is a ErrMigrationNotAllowed
func IsErrMigrationNotAllowed(err error) bool {
_, ok := err.(*ErrMigrationNotAllowed)
return ok
}
// __________ .__
// \______ \____________ ____ ____ | |__
// | | _/\_ __ \__ \ / \_/ ___\| | \
@@ -2003,7 +2041,7 @@ type ErrNotValidReviewRequest struct {
// IsErrNotValidReviewRequest checks if an error is a ErrNotValidReviewRequest.
func IsErrNotValidReviewRequest(err error) bool {
_, ok := err.(ErrReviewNotExist)
_, ok := err.(ErrNotValidReviewRequest)
return ok
}

View File

@@ -5,7 +5,7 @@
act_user_id: 2
repo_id: 2
is_private: true
created_unix: 1571686356
created_unix: 1603228283
-
id: 2

View File

@@ -725,6 +725,7 @@ func createComment(e *xorm.Session, opts *CreateCommentOptions) (_ *Comment, err
RefAction: opts.RefAction,
RefIsPull: opts.RefIsPull,
IsForcePush: opts.IsForcePush,
Invalidated: opts.Invalidated,
}
if _, err = e.Insert(comment); err != nil {
return nil, err
@@ -891,6 +892,7 @@ type CreateCommentOptions struct {
RefAction references.XRefAction
RefIsPull bool
IsForcePush bool
Invalidated bool
}
// CreateComment creates comment of issue or commit.
@@ -966,6 +968,8 @@ type FindCommentsOptions struct {
ReviewID int64
Since int64
Before int64
Line int64
TreePath string
Type CommentType
}
@@ -989,6 +993,12 @@ func (opts *FindCommentsOptions) toConds() builder.Cond {
if opts.Type != CommentTypeUnknown {
cond = cond.And(builder.Eq{"comment.type": opts.Type})
}
if opts.Line > 0 {
cond = cond.And(builder.Eq{"comment.line": opts.Line})
}
if len(opts.TreePath) > 0 {
cond = cond.And(builder.Eq{"comment.tree_path": opts.TreePath})
}
return cond
}
@@ -1003,6 +1013,8 @@ func findComments(e Engine, opts FindCommentsOptions) ([]*Comment, error) {
sess = opts.setSessionPagination(sess)
}
// WARNING: If you change this order you will need to fix createCodeComment
return comments, sess.
Asc("comment.created_unix").
Asc("comment.id").
@@ -1124,6 +1136,10 @@ func fetchCodeCommentsByReview(e Engine, issue *Issue, currentUser *User, review
return nil, err
}
if err := comment.LoadReactions(issue.Repo); err != nil {
return nil, err
}
if re, ok := reviews[comment.ReviewID]; ok && re != nil {
// If the review is pending only the author can see the comments (except the review is set)
if review.ID == 0 {

View File

@@ -119,8 +119,18 @@ func InitOAuth2() error {
if err := oauth2.Init(x); err != nil {
return err
}
loginSources, _ := GetActiveOAuth2ProviderLoginSources()
return initOAuth2LoginSources()
}
// ResetOAuth2 clears existing OAuth2 providers and loads them from DB
func ResetOAuth2() error {
oauth2.ClearProviders()
return initOAuth2LoginSources()
}
// initOAuth2LoginSources is used to load and register all active OAuth2 providers
func initOAuth2LoginSources() error {
loginSources, _ := GetActiveOAuth2ProviderLoginSources()
for _, source := range loginSources {
oAuth2Config := source.OAuth2()
err := oauth2.RegisterProvider(source.Name, oAuth2Config.Provider, oAuth2Config.ClientID, oAuth2Config.ClientSecret, oAuth2Config.OpenIDConnectAutoDiscoveryURL, oAuth2Config.CustomURLMapping)

View File

@@ -426,6 +426,7 @@ func (repo *Repository) innerAPIFormat(e Engine, mode AccessMode, isParent bool)
HTMLURL: repo.HTMLURL(),
SSHURL: cloneLink.SSH,
CloneURL: cloneLink.HTTPS,
OriginalURL: repo.SanitizedOriginalURL(),
Website: repo.Website,
Stars: repo.NumStars,
Forks: repo.NumForks,

View File

@@ -271,6 +271,27 @@ func getUserRepoPermission(e Engine, repo *Repository, user *User) (perm Permiss
return
}
// IsUserRealRepoAdmin checks if the user is a real admin of this repo
func IsUserRealRepoAdmin(repo *Repository, user *User) (bool, error) {
if repo.OwnerID == user.ID {
return true, nil
}
sess := x.NewSession()
defer sess.Close()
if err := repo.getOwner(sess); err != nil {
return false, err
}
accessMode, err := accessLevel(sess, user, repo)
if err != nil {
return false, err
}
return accessMode >= AccessModeAdmin, nil
}
// IsUserRepoAdmin returns true if the user has admin rights on a repo
func IsUserRepoAdmin(repo *Repository, user *User) (bool, error) {
return isUserRepoAdmin(x, repo, user)

View File

@@ -147,6 +147,27 @@ func GetMigratingTask(repoID int64) (*Task, error) {
return &task, nil
}
// GetMigratingTaskByID returns the migrating task and its options by the task's id and doer's id
func GetMigratingTaskByID(id, doerID int64) (*Task, *migration.MigrateOptions, error) {
var task = Task{
ID: id,
DoerID: doerID,
Type: structs.TaskTypeMigrateRepo,
}
has, err := x.Get(&task)
if err != nil {
return nil, nil, err
} else if !has {
return nil, nil, ErrTaskDoesNotExist{id, 0, task.Type}
}
var opts migration.MigrateOptions
if err := json.Unmarshal([]byte(task.PayloadContent), &opts); err != nil {
return nil, nil, err
}
return &task, &opts, nil
}
// FindTaskOptions find all tasks
type FindTaskOptions struct {
Status int

View File

@@ -197,10 +197,13 @@ func FindTopics(opts *FindTopicOptions) (topics []*Topic, err error) {
// GetRepoTopicByName retrieves the named topic for a repo if it exists
func GetRepoTopicByName(repoID int64, topicName string) (*Topic, error) {
return getRepoTopicByName(x, repoID, topicName)
}
func getRepoTopicByName(e Engine, repoID int64, topicName string) (*Topic, error) {
var cond = builder.NewCond()
var topic Topic
cond = cond.And(builder.Eq{"repo_topic.repo_id": repoID}).And(builder.Eq{"topic.name": topicName})
sess := x.Table("topic").Where(cond)
sess := e.Table("topic").Where(cond)
sess.Join("INNER", "repo_topic", "repo_topic.topic_id = topic.id")
has, err := sess.Get(&topic)
if has {
@@ -211,7 +214,13 @@ func GetRepoTopicByName(repoID int64, topicName string) (*Topic, error) {
// AddTopic adds a topic name to a repository (if it does not already have it)
func AddTopic(repoID int64, topicName string) (*Topic, error) {
topic, err := GetRepoTopicByName(repoID, topicName)
sess := x.NewSession()
defer sess.Close()
if err := sess.Begin(); err != nil {
return nil, err
}
topic, err := getRepoTopicByName(sess, repoID, topicName)
if err != nil {
return nil, err
}
@@ -220,7 +229,25 @@ func AddTopic(repoID int64, topicName string) (*Topic, error) {
return topic, nil
}
return addTopicByNameToRepo(x, repoID, topicName)
topic, err = addTopicByNameToRepo(sess, repoID, topicName)
if err != nil {
return nil, err
}
topicNames := make([]string, 0, 25)
if err := sess.Select("name").Table("topic").
Join("INNER", "repo_topic", "repo_topic.topic_id = topic.id").
Where("repo_topic.repo_id = ?", repoID).Desc("topic.repo_count").Find(&topicNames); err != nil {
return nil, err
}
if _, err := sess.ID(repoID).Cols("topics").Update(&Repository{
Topics: topicNames,
}); err != nil {
return nil, err
}
return topic, sess.Commit()
}
// DeleteTopic removes a topic name from a repository (if it has it)

View File

@@ -191,9 +191,6 @@ func (u *User) BeforeUpdate() {
if len(u.AvatarEmail) == 0 {
u.AvatarEmail = u.Email
}
if len(u.AvatarEmail) > 0 && u.Avatar == "" {
u.Avatar = base.HashEmail(u.AvatarEmail)
}
}
u.LowerName = strings.ToLower(u.Name)
@@ -554,6 +551,7 @@ func (u *User) GetOwnedOrganizations() (err error) {
}
// GetOrganizations returns paginated organizations that user belongs to.
// TODO: does not respect the All option and does not show orgs you privately participate in
func (u *User) GetOrganizations(opts *SearchOrganizationsOptions) error {
sess := x.NewSession()
defer sess.Close()
@@ -824,6 +822,10 @@ func CreateUser(u *User) (err error) {
return ErrEmailAlreadyUsed{u.Email}
}
if err = ValidateEmail(u.Email); err != nil {
return err
}
isExist, err = isEmailUsed(sess, u.Email)
if err != nil {
return err
@@ -835,7 +837,6 @@ func CreateUser(u *User) (err error) {
u.LowerName = strings.ToLower(u.Name)
u.AvatarEmail = u.Email
u.Avatar = base.HashEmail(u.AvatarEmail)
if u.Rands, err = GetUserSalt(); err != nil {
return err
}
@@ -967,8 +968,12 @@ func checkDupEmail(e Engine, u *User) error {
return nil
}
func updateUser(e Engine, u *User) error {
_, err := e.ID(u.ID).AllCols().Update(u)
func updateUser(e Engine, u *User) (err error) {
u.Email = strings.ToLower(u.Email)
if err = ValidateEmail(u.Email); err != nil {
return err
}
_, err = e.ID(u.ID).AllCols().Update(u)
return err
}
@@ -988,13 +993,21 @@ func updateUserCols(e Engine, u *User, cols ...string) error {
}
// UpdateUserSetting updates user's settings.
func UpdateUserSetting(u *User) error {
func UpdateUserSetting(u *User) (err error) {
sess := x.NewSession()
defer sess.Close()
if err = sess.Begin(); err != nil {
return err
}
if !u.IsOrganization() {
if err := checkDupEmail(x, u); err != nil {
if err = checkDupEmail(sess, u); err != nil {
return err
}
}
return updateUser(x, u)
if err = updateUser(sess, u); err != nil {
return err
}
return sess.Commit()
}
// deleteBeans deletes all given beans, beans should contain delete conditions.

View File

@@ -39,10 +39,9 @@ func (u *User) generateRandomAvatar(e Engine) error {
if err != nil {
return fmt.Errorf("RandomImage: %v", err)
}
// NOTICE for random avatar, it still uses id as avatar name, but custom avatar use md5
// since random image is not a user's photo, there is no security for enumable
if u.Avatar == "" {
u.Avatar = fmt.Sprintf("%d", u.ID)
u.Avatar = base.HashEmail(u.AvatarEmail)
}
if err := storage.SaveFrom(storage.Avatars, u.CustomAvatarRelativePath(), func(w io.Writer) error {

View File

@@ -17,7 +17,7 @@ func TestGetUserHeatmapDataByUser(t *testing.T) {
CountResult int
JSONResult string
}{
{2, 1, `[{"timestamp":1571616000,"contributions":1}]`},
{2, 1, `[{"timestamp":1603152000,"contributions":1}]`},
{3, 0, `[]`},
}
// Prepare

View File

@@ -8,6 +8,7 @@ package models
import (
"errors"
"fmt"
"net/mail"
"strings"
"code.gitea.io/gitea/modules/log"
@@ -32,6 +33,19 @@ type EmailAddress struct {
IsPrimary bool `xorm:"-"`
}
// ValidateEmail checks whether the email is an allowed address
func ValidateEmail(email string) error {
if len(email) == 0 {
return nil
}
if _, err := mail.ParseAddress(email); err != nil {
return ErrEmailInvalid{email}
}
return nil
}
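As a side note, the behaviour the new admin and signup tests depend on can be checked with a small standalone sketch (hypothetical example, not part of this change): net/mail's ParseAddress fails on addresses carrying stray CR/LF, which is what makes the "\r\n" inputs in those tests invalid.

package main

import (
	"fmt"
	"net/mail"
)

func main() {
	// Addresses with trailing CR/LF do not parse as a single RFC 5322 address.
	for _, email := range []string{"user@example.com", "user@example.com\r\n"} {
		if _, err := mail.ParseAddress(email); err != nil {
			fmt.Printf("%q rejected: %v\n", email, err)
		} else {
			fmt.Printf("%q accepted\n", email)
		}
	}
}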
// GetEmailAddresses returns all email addresses belonging to the given user.
func GetEmailAddresses(uid int64) ([]*EmailAddress, error) {
emails := make([]*EmailAddress, 0, 5)
@@ -143,6 +157,10 @@ func addEmailAddress(e Engine, email *EmailAddress) error {
return ErrEmailAlreadyUsed{email.Email}
}
if err = ValidateEmail(email.Email); err != nil {
return err
}
_, err = e.Insert(email)
return err
}
@@ -167,6 +185,9 @@ func AddEmailAddresses(emails []*EmailAddress) error {
} else if used {
return ErrEmailAlreadyUsed{emails[i].Email}
}
if err = ValidateEmail(emails[i].Email); err != nil {
return err
}
}
if _, err := x.Insert(emails); err != nil {

View File

@@ -346,6 +346,21 @@ func TestCreateUser(t *testing.T) {
assert.NoError(t, DeleteUser(user))
}
func TestCreateUserInvalidEmail(t *testing.T) {
user := &User{
Name: "GiteaBot",
Email: "GiteaBot@gitea.io\r\n",
Passwd: ";p['////..-++']",
IsAdmin: false,
Theme: setting.UI.DefaultTheme,
MustChangePassword: false,
}
err := CreateUser(user)
assert.Error(t, err)
assert.True(t, IsErrEmailInvalid(err))
}
func TestCreateUser_Issue5882(t *testing.T) {
// Init settings

View File

@@ -118,6 +118,11 @@ func RemoveProvider(providerName string) {
delete(goth.GetProviders(), providerName)
}
// ClearProviders clears all OAuth2 providers from the goth lib
func ClearProviders() {
goth.ClearProviders()
}
// used to create different types of goth providers
func createProvider(providerName, providerType, clientID, clientSecret, openIDConnectAutoDiscoveryURL string, customURLMapping *CustomURLMapping) (goth.Provider, error) {
callbackURL := setting.AppURL + "user/oauth2/" + url.PathEscape(providerName) + "/callback"

View File

@@ -12,6 +12,9 @@ import (
"github.com/msteinert/pam"
)
// Supported is true when built with PAM
var Supported = true
// Auth performs authentication against the given PAM service
func Auth(serviceName, userName, passwd string) (string, error) {
t, err := pam.StartFunc(serviceName, userName, func(s pam.Style, msg string) (string, error) {

View File

@@ -10,6 +10,9 @@ import (
"errors"
)
// Supported is false when built without PAM
var Supported = false
// Auth is not supported when built without the pam build tag
func Auth(serviceName, userName, passwd string) (string, error) {
return "", errors.New("PAM not supported")

View File

@@ -102,6 +102,9 @@ func ParseRemoteAddr(remoteAddr, authUsername, authPassword string, user *models
u.User = url.UserPassword(authUsername, authPassword)
}
remoteAddr = u.String()
if u.Scheme == "git" && u.Port() != "" && (strings.Contains(remoteAddr, "%0d") || strings.Contains(remoteAddr, "%0a")) {
return "", models.ErrInvalidCloneAddr{IsURLError: true}
}
} else if !user.CanImportLocal() {
return "", models.ErrInvalidCloneAddr{IsPermissionDenied: true}
} else if !com.IsDir(remoteAddr) {

View File

@@ -199,7 +199,6 @@ func (f *AccessTokenForm) Validate(ctx *macaron.Context, errs binding.Errors) bi
type UpdateProfileForm struct {
Name string `binding:"AlphaDashDot;MaxSize(40)"`
FullName string `binding:"MaxSize(100)"`
Email string `binding:"Required;Email;MaxSize(254)"`
KeepEmailPrivate bool
Website string `binding:"ValidUrl;MaxSize(255)"`
Location string `binding:"MaxSize(50)"`

View File

@@ -10,6 +10,7 @@ import (
"crypto/sha256"
"encoding/base64"
"encoding/hex"
"errors"
"fmt"
"net/http"
"net/url"
@@ -65,6 +66,11 @@ func BasicAuthDecode(encoded string) (string, string, error) {
}
auth := strings.SplitN(string(s), ":", 2)
if len(auth) != 2 {
return "", "", errors.New("invalid basic authentication")
}
return auth[0], auth[1], nil
}

View File

@@ -46,6 +46,12 @@ func TestBasicAuthDecode(t *testing.T) {
assert.NoError(t, err)
assert.Equal(t, "foo", user)
assert.Equal(t, "bar", pass)
_, _, err = BasicAuthDecode("aW52YWxpZA==")
assert.Error(t, err)
_, _, err = BasicAuthDecode("invalid")
assert.Error(t, err)
}
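For context, a tiny standalone sketch (not part of the change) of why the added length check matters: "aW52YWxpZA==" is base64 for "invalid", which contains no colon, so SplitN yields a single element and indexing auth[1] without the check would panic.

package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

func main() {
	s, _ := base64.StdEncoding.DecodeString("aW52YWxpZA==")
	auth := strings.SplitN(string(s), ":", 2)
	fmt.Println(string(s), len(auth)) // "invalid" 1 -- auth[1] would be out of range
}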
func TestBasicAuthEncode(t *testing.T) {

View File

@@ -255,3 +255,61 @@ func (ctx *APIContext) NotFound(objs ...interface{}) {
"errors": errors,
})
}
// RepoRefForAPI handles repository reference names when the ref name is not explicitly given
func RepoRefForAPI() macaron.Handler {
return func(ctx *APIContext) {
// Empty repository does not have reference information.
if ctx.Repo.Repository.IsEmpty {
return
}
var err error
if ctx.Repo.GitRepo == nil {
repoPath := models.RepoPath(ctx.Repo.Owner.Name, ctx.Repo.Repository.Name)
ctx.Repo.GitRepo, err = git.OpenRepository(repoPath)
if err != nil {
ctx.InternalServerError(err)
return
}
// We opened it, we should close it
defer func() {
// If it's been set to nil then assume someone else has closed it.
if ctx.Repo.GitRepo != nil {
ctx.Repo.GitRepo.Close()
}
}()
}
refName := getRefName(ctx.Context, RepoRefAny)
if ctx.Repo.GitRepo.IsBranchExist(refName) {
ctx.Repo.Commit, err = ctx.Repo.GitRepo.GetBranchCommit(refName)
if err != nil {
ctx.InternalServerError(err)
return
}
ctx.Repo.CommitID = ctx.Repo.Commit.ID.String()
} else if ctx.Repo.GitRepo.IsTagExist(refName) {
ctx.Repo.Commit, err = ctx.Repo.GitRepo.GetTagCommit(refName)
if err != nil {
ctx.InternalServerError(err)
return
}
ctx.Repo.CommitID = ctx.Repo.Commit.ID.String()
} else if len(refName) == 40 {
ctx.Repo.CommitID = refName
ctx.Repo.Commit, err = ctx.Repo.GitRepo.GetCommit(refName)
if err != nil {
ctx.NotFound("GetCommit", err)
return
}
} else {
ctx.NotFound(fmt.Errorf("not exist: '%s'", ctx.Params("*")))
return
}
ctx.Next()
}
}

View File

@@ -704,7 +704,6 @@ func RepoRefByType(refType RepoRefType) macaron.Handler {
err error
)
// For API calls.
if ctx.Repo.GitRepo == nil {
repoPath := models.RepoPath(ctx.Repo.Owner.Name, ctx.Repo.Repository.Name)
ctx.Repo.GitRepo, err = git.OpenRepository(repoPath)
@@ -773,7 +772,7 @@ func RepoRefByType(refType RepoRefType) macaron.Handler {
ctx.Repo.Commit, err = ctx.Repo.GitRepo.GetCommit(refName)
if err != nil {
ctx.NotFound("GetCommit", nil)
ctx.NotFound("GetCommit", err)
return
}
} else {

View File

@@ -27,7 +27,7 @@ type BlameReader struct {
cmd *exec.Cmd
pid int64
output io.ReadCloser
scanner *bufio.Scanner
reader *bufio.Reader
lastSha *string
cancel context.CancelFunc
}
@@ -38,23 +38,30 @@ var shaLineRegex = regexp.MustCompile("^([a-z0-9]{40})")
func (r *BlameReader) NextPart() (*BlamePart, error) {
var blamePart *BlamePart
scanner := r.scanner
reader := r.reader
if r.lastSha != nil {
blamePart = &BlamePart{*r.lastSha, make([]string, 0)}
}
for scanner.Scan() {
line := scanner.Text()
var line []byte
var isPrefix bool
var err error
for err != io.EOF {
line, isPrefix, err = reader.ReadLine()
if err != nil && err != io.EOF {
return blamePart, err
}
// Skip empty lines
if len(line) == 0 {
// isPrefix will be false
continue
}
lines := shaLineRegex.FindStringSubmatch(line)
lines := shaLineRegex.FindSubmatch(line)
if lines != nil {
sha1 := lines[1]
sha1 := string(lines[1])
if blamePart == nil {
blamePart = &BlamePart{sha1, make([]string, 0)}
@@ -62,12 +69,27 @@ func (r *BlameReader) NextPart() (*BlamePart, error) {
if blamePart.Sha != sha1 {
r.lastSha = &sha1
// need to munch to end of line...
for isPrefix {
_, isPrefix, err = reader.ReadLine()
if err != nil && err != io.EOF {
return blamePart, err
}
}
return blamePart, nil
}
} else if line[0] == '\t' {
code := line[1:]
blamePart.Lines = append(blamePart.Lines, code)
blamePart.Lines = append(blamePart.Lines, string(code))
}
// need to munch to end of line...
for isPrefix {
_, isPrefix, err = reader.ReadLine()
if err != nil && err != io.EOF {
return blamePart, err
}
}
}
@@ -121,13 +143,13 @@ func createBlameReader(ctx context.Context, dir string, command ...string) (*Bla
pid := process.GetManager().Add(fmt.Sprintf("GetBlame [repo_path: %s]", dir), cancel)
scanner := bufio.NewScanner(stdout)
reader := bufio.NewReader(stdout)
return &BlameReader{
cmd,
pid,
stdout,
scanner,
reader,
nil,
cancel,
}, nil
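A brief standalone sketch (hypothetical, not part of the change) of the bufio.Reader.ReadLine contract the rewritten BlameReader relies on: when a line is longer than the buffer, ReadLine returns isPrefix=true and the caller must keep reading until isPrefix becomes false, which is what the "munch to end of line" loops above do; the previous bufio.Scanner instead stops with ErrTooLong once a line exceeds its token limit.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// A deliberately small buffer forces isPrefix=true for the long line.
	input := "short\n" + strings.Repeat("x", 100) + "\n"
	reader := bufio.NewReaderSize(strings.NewReader(input), 16)
	for {
		chunk, isPrefix, err := reader.ReadLine()
		if err != nil {
			break // io.EOF ends the loop
		}
		fmt.Printf("read %d bytes, isPrefix=%v\n", len(chunk), isPrefix)
	}
}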

View File

@@ -153,6 +153,7 @@ func (c *Command) RunInDirTimeoutEnvFullPipelineFunc(env []string, timeout time.
err := fn(ctx, cancel)
if err != nil {
cancel()
_ = cmd.Wait()
return err
}
}

View File

@@ -32,6 +32,7 @@ var (
GitExecutable = "git"
// DefaultContext is the default context to run git commands in
// will be overwritten by Init with HammerContext
DefaultContext = context.Background()
gitVersion *version.Version

View File

@@ -8,6 +8,7 @@ package git
import (
"bytes"
"container/list"
"context"
"errors"
"fmt"
"os"
@@ -166,19 +167,24 @@ type CloneRepoOptions struct {
// Clone clones original repository to target path.
func Clone(from, to string, opts CloneRepoOptions) (err error) {
return CloneWithContext(DefaultContext, from, to, opts)
}
// CloneWithContext clones original repository to target path.
func CloneWithContext(ctx context.Context, from, to string, opts CloneRepoOptions) (err error) {
cargs := make([]string, len(GlobalCommandArgs))
copy(cargs, GlobalCommandArgs)
return CloneWithArgs(from, to, cargs, opts)
return CloneWithArgs(ctx, from, to, cargs, opts)
}
// CloneWithArgs clones the original repository to the target path with the given arguments.
func CloneWithArgs(from, to string, args []string, opts CloneRepoOptions) (err error) {
func CloneWithArgs(ctx context.Context, from, to string, args []string, opts CloneRepoOptions) (err error) {
toDir := path.Dir(to)
if err = os.MkdirAll(toDir, os.ModePerm); err != nil {
return err
}
cmd := NewCommandNoGlobals(args...).AddArguments("clone")
cmd := NewCommandContextNoGlobals(ctx, args...).AddArguments("clone")
if opts.Mirror {
cmd.AddArguments("--mirror")
}

View File

@@ -13,6 +13,7 @@ import (
"strings"
"sync"
"code.gitea.io/gitea/modules/analyze"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/setting"
"github.com/alecthomas/chroma/formatters/html"
@@ -117,9 +118,11 @@ func File(numLines int, fileName string, code []byte) map[int]string {
fileName = "test." + val
}
lexer := lexers.Match(fileName)
language := analyze.GetCodeLanguage(fileName, code)
lexer := lexers.Get(language)
if lexer == nil {
lexer = lexers.Analyse(string(code))
lexer = lexers.Match(fileName)
if lexer == nil {
lexer = lexers.Fallback
}

View File

@@ -8,6 +8,7 @@ import (
"crypto/sha256"
"encoding/hex"
"errors"
"fmt"
"io"
"os"
@@ -21,6 +22,21 @@ var (
errSizeMismatch = errors.New("Content size does not match")
)
// ErrRangeNotSatisfiable represents an error where the requested range is not satisfiable.
type ErrRangeNotSatisfiable struct {
FromByte int64
}
func (err ErrRangeNotSatisfiable) Error() string {
return fmt.Sprintf("Requested range %d is not satisfiable", err.FromByte)
}
// IsErrRangeNotSatisfiable returns true if the error is an ErrRangeNotSatisfiable
func IsErrRangeNotSatisfiable(err error) bool {
_, ok := err.(ErrRangeNotSatisfiable)
return ok
}
// ContentStore provides a simple file system based storage.
type ContentStore struct {
storage.ObjectStorage
@@ -35,7 +51,12 @@ func (s *ContentStore) Get(meta *models.LFSMetaObject, fromByte int64) (io.ReadC
return nil, err
}
if fromByte > 0 {
_, err = f.Seek(fromByte, os.SEEK_CUR)
if fromByte >= meta.Size {
return nil, ErrRangeNotSatisfiable{
FromByte: fromByte,
}
}
_, err = f.Seek(fromByte, io.SeekStart)
if err != nil {
log.Error("Whilst trying to read LFS OID[%s]: Unable to seek to %d Error: %v", meta.Oid, fromByte, err)
}

View File

@@ -191,8 +191,12 @@ func getContentHandler(ctx *context.Context) {
contentStore := &ContentStore{ObjectStorage: storage.LFS}
content, err := contentStore.Get(meta, fromByte)
if err != nil {
// Errors are logged in contentStore.Get
writeStatus(ctx, 404)
if IsErrRangeNotSatisfiable(err) {
writeStatus(ctx, http.StatusRequestedRangeNotSatisfiable)
} else {
// Errors are logged in contentStore.Get
writeStatus(ctx, 404)
}
return
}
defer content.Close()

View File

@@ -632,16 +632,18 @@ func shortLinkProcessorFull(ctx *postProcessCtx, node *html.Node, noLink bool) {
// When parsing HTML, x/net/html will change all quotes which are
// not used for syntax into UTF-8 quotes. So checking val[0] won't
// be enough, since that only checks a single byte.
if (strings.HasPrefix(val, "“") && strings.HasSuffix(val, "”")) ||
(strings.HasPrefix(val, "‘") && strings.HasSuffix(val, "’")) {
const lenQuote = len("‘")
val = val[lenQuote : len(val)-lenQuote]
} else if (strings.HasPrefix(val, "\"") && strings.HasSuffix(val, "\"")) ||
(strings.HasPrefix(val, "'") && strings.HasSuffix(val, "'")) {
val = val[1 : len(val)-1]
} else if strings.HasPrefix(val, "'") && strings.HasSuffix(val, "’") {
const lenQuote = len("’")
val = val[1 : len(val)-lenQuote]
if len(val) > 1 {
if (strings.HasPrefix(val, "“") && strings.HasSuffix(val, "”")) ||
(strings.HasPrefix(val, "‘") && strings.HasSuffix(val, "’")) {
const lenQuote = len("‘")
val = val[lenQuote : len(val)-lenQuote]
} else if (strings.HasPrefix(val, "\"") && strings.HasSuffix(val, "\"")) ||
(strings.HasPrefix(val, "'") && strings.HasSuffix(val, "'")) {
val = val[1 : len(val)-1]
} else if strings.HasPrefix(val, "'") && strings.HasSuffix(val, "’") {
const lenQuote = len("’")
val = val[1 : len(val)-lenQuote]
}
}
props[key] = val
}

View File

@@ -142,7 +142,7 @@ func TestRender_links(t *testing.T) {
`<p><a href="ftp://gitea.com/file.txt" rel="nofollow">ftp://gitea.com/file.txt</a></p>`)
test(
"magnet:?xt=urn:btih:5dee65101db281ac9c46344cd6b175cdcadabcde&dn=download",
`<p><a href="magnet:?dn=download&xt=urn%3Abtih%3A5dee65101db281ac9c46344cd6b175cdcadabcde" rel="nofollow">magnet:?xt=urn:btih:5dee65101db281ac9c46344cd6b175cdcadabcde&amp;dn=download</a></p>`)
`<p><a href="magnet:?xt=urn%3Abtih%3A5dee65101db281ac9c46344cd6b175cdcadabcde&dn=download" rel="nofollow">magnet:?xt=urn:btih:5dee65101db281ac9c46344cd6b175cdcadabcde&amp;dn=download</a></p>`)
// Test that should *not* be turned into URL
test(

View File

@@ -0,0 +1,46 @@
// Copyright 2019 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package matchlist
import (
"strings"
"github.com/gobwas/glob"
)
// Matchlist represents a block or allow list
type Matchlist struct {
ruleGlobs []glob.Glob
}
// NewMatchlist creates a new block or allow list
func NewMatchlist(rules ...string) (*Matchlist, error) {
for i := range rules {
rules[i] = strings.ToLower(rules[i])
}
list := Matchlist{
ruleGlobs: make([]glob.Glob, 0, len(rules)),
}
for _, rule := range rules {
rg, err := glob.Compile(rule)
if err != nil {
return nil, err
}
list.ruleGlobs = append(list.ruleGlobs, rg)
}
return &list, nil
}
// Match reports whether u matches any rule in the list
func (b *Matchlist) Match(u string) bool {
for _, r := range b.ruleGlobs {
if r.Match(u) {
return true
}
}
return false
}
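A hedged usage sketch of the new package (standalone example; only the NewMatchlist and Match functions added above are assumed): rules are gobwas/glob patterns, lowercased at construction time, and Match does no lowercasing of its own, which is why isMigrateURLAllowed later parses strings.ToLower(remoteURL).

package main

import (
	"fmt"

	"code.gitea.io/gitea/modules/matchlist"
)

func main() {
	// Block the bare host and any subdomain of example.com.
	block, err := matchlist.NewMatchlist("example.com", "*.example.com")
	if err != nil {
		panic(err)
	}
	for _, host := range []string{"example.com", "git.example.com", "gitea.io"} {
		fmt.Printf("%-16s blocked: %v\n", host, block.Match(host))
	}
}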

View File

@@ -14,6 +14,7 @@ import (
"strings"
"time"
"code.gitea.io/gitea/models"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/migrations/base"
"code.gitea.io/gitea/modules/structs"
@@ -47,7 +48,7 @@ func (f *GiteaDownloaderFactory) New(ctx context.Context, opts base.MigrateOptio
path := strings.Split(repoNameSpace, "/")
if len(path) < 2 {
return nil, fmt.Errorf("invalid path")
return nil, fmt.Errorf("invalid path: %s", repoNameSpace)
}
repoPath := strings.Join(path[len(path)-2:], "/")
@@ -87,7 +88,7 @@ func NewGiteaDownloader(ctx context.Context, baseURL, repoPath, username, passwo
gitea_sdk.SetContext(ctx),
)
if err != nil {
log.Error(fmt.Sprintf("NewGiteaDownloader: %s", err.Error()))
log.Error(fmt.Sprintf("Failed to create NewGiteaDownloader for: %s. Error: %v", baseURL, err))
return nil, err
}
@@ -101,12 +102,13 @@ func NewGiteaDownloader(ctx context.Context, baseURL, repoPath, username, passwo
// set small maxPerPage since we can only guess
// (default would be 50 but this can differ)
maxPerPage := 10
// new gitea instances can tell us what maximum they have
if giteaClient.CheckServerVersionConstraint(">=1.13.0") == nil {
apiConf, _, err := giteaClient.GetGlobalAPISettings()
if err != nil {
return nil, err
}
// gitea instances >=1.13 can tell us what maximum they have
apiConf, _, err := giteaClient.GetGlobalAPISettings()
if err != nil {
log.Info("Unable to get global API settings. Ignoring these.")
log.Debug("giteaClient.GetGlobalAPISettings. Error: %v", err)
}
if apiConf != nil {
maxPerPage = apiConf.MaxResponseItems
}
@@ -324,45 +326,44 @@ func (g *GiteaDownloader) GetAsset(_ string, relID, id int64) (io.ReadCloser, er
}
func (g *GiteaDownloader) getIssueReactions(index int64) ([]*base.Reaction, error) {
var reactions []*base.Reaction
if err := g.client.CheckServerVersionConstraint(">=1.11"); err != nil {
log.Info("GiteaDownloader: instance too old, skip getIssueReactions")
return reactions, nil
return []*base.Reaction{}, nil
}
rl, _, err := g.client.GetIssueReactions(g.repoOwner, g.repoName, index)
if err != nil {
return nil, err
}
for _, reaction := range rl {
reactions = append(reactions, &base.Reaction{
UserID: reaction.User.ID,
UserName: reaction.User.UserName,
Content: reaction.Reaction,
})
}
return reactions, nil
return g.convertReactions(rl), nil
}
func (g *GiteaDownloader) getCommentReactions(commentID int64) ([]*base.Reaction, error) {
var reactions []*base.Reaction
if err := g.client.CheckServerVersionConstraint(">=1.11"); err != nil {
log.Info("GiteaDownloader: instance too old, skip getCommentReactions")
return reactions, nil
return []*base.Reaction{}, nil
}
rl, _, err := g.client.GetIssueCommentReactions(g.repoOwner, g.repoName, commentID)
if err != nil {
return nil, err
}
return g.convertReactions(rl), nil
}
func (g *GiteaDownloader) convertReactions(rl []*gitea_sdk.Reaction) []*base.Reaction {
var reactions []*base.Reaction
for i := range rl {
if rl[i].User.ID <= 0 {
continue
}
reactions = append(reactions, &base.Reaction{
UserID: rl[i].User.ID,
UserName: rl[i].User.UserName,
Content: rl[i].Reaction,
})
}
return reactions, nil
return reactions
}
// GetIssues returns issues according to start and limit
@@ -394,7 +395,11 @@ func (g *GiteaDownloader) GetIssues(page, perPage int) ([]*base.Issue, bool, err
reactions, err := g.getIssueReactions(issue.Index)
if err != nil {
return nil, false, fmt.Errorf("error while loading reactions: %v", err)
log.Warn("Unable to load reactions during migrating issue #%d to %s/%s. Error: %v", issue.Index, g.repoOwner, g.repoName, err)
if err2 := models.CreateRepositoryNotice(
fmt.Sprintf("Unable to load reactions during migrating issue #%d to %s/%s. Error: %v", issue.Index, g.repoOwner, g.repoName, err)); err2 != nil {
log.Error("create repository notice failed: ", err2)
}
}
var assignees []string
@@ -445,13 +450,17 @@ func (g *GiteaDownloader) GetComments(index int64) ([]*base.Comment, error) {
// Page: i,
}})
if err != nil {
return nil, fmt.Errorf("error while listing comments: %v", err)
return nil, fmt.Errorf("error while listing comments for issue #%d. Error: %v", index, err)
}
for _, comment := range comments {
reactions, err := g.getCommentReactions(comment.ID)
if err != nil {
return nil, fmt.Errorf("error while listing comment creactions: %v", err)
log.Warn("Unable to load comment reactions during migrating issue #%d for comment %d to %s/%s. Error: %v", index, comment.ID, g.repoOwner, g.repoName, err)
if err2 := models.CreateRepositoryNotice(
fmt.Sprintf("Unable to load reactions during migrating issue #%d for comment %d to %s/%s. Error: %v", index, comment.ID, g.repoOwner, g.repoName, err)); err2 != nil {
log.Error("create repository notice failed: ", err2)
}
}
allComments = append(allComments, &base.Comment{
@@ -489,7 +498,7 @@ func (g *GiteaDownloader) GetPullRequests(page, perPage int) ([]*base.PullReques
State: gitea_sdk.StateAll,
})
if err != nil {
return nil, false, fmt.Errorf("error while listing repos: %v", err)
return nil, false, fmt.Errorf("error while listing pull requests (page: %d, pagesize: %d). Error: %v", page, perPage, err)
}
for _, pr := range prs {
var milestone string
@@ -520,7 +529,7 @@ func (g *GiteaDownloader) GetPullRequests(page, perPage int) ([]*base.PullReques
if headSHA == "" {
headCommit, _, err := g.client.GetSingleCommit(g.repoOwner, g.repoName, url.PathEscape(pr.Head.Ref))
if err != nil {
return nil, false, fmt.Errorf("error while resolving git ref: %v", err)
return nil, false, fmt.Errorf("error while resolving head git ref: %s for pull #%d. Error: %v", pr.Head.Ref, pr.Index, err)
}
headSHA = headCommit.SHA
}
@@ -533,7 +542,11 @@ func (g *GiteaDownloader) GetPullRequests(page, perPage int) ([]*base.PullReques
reactions, err := g.getIssueReactions(pr.Index)
if err != nil {
return nil, false, fmt.Errorf("error while loading reactions: %v", err)
log.Warn("Unable to load reactions during migrating pull #%d to %s/%s. Error: %v", pr.Index, g.repoOwner, g.repoName, err)
if err2 := models.CreateRepositoryNotice(
fmt.Sprintf("Unable to load reactions during migrating pull #%d to %s/%s. Error: %v", pr.Index, g.repoOwner, g.repoName, err)); err2 != nil {
log.Error("create repository notice failed: ", err2)
}
}
var assignees []string

View File

@@ -28,6 +28,7 @@ import (
"code.gitea.io/gitea/modules/storage"
"code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/timeutil"
"code.gitea.io/gitea/services/pull"
gouuid "github.com/google/uuid"
)
@@ -124,7 +125,7 @@ func (g *GiteaLocalUploader) CreateRepo(repo *base.Repository, opts base.Migrate
}
r.DefaultBranch = repo.DefaultBranch
r, err = repository.MigrateRepositoryGitData(g.doer, owner, r, base.MigrateOptions{
r, err = repository.MigrateRepositoryGitData(g.ctx, owner, r, base.MigrateOptions{
RepoName: g.repoName,
Description: repo.Description,
OriginalURL: repo.OriginalURL,
@@ -153,6 +154,15 @@ func (g *GiteaLocalUploader) Close() {
// CreateTopics creates topics
func (g *GiteaLocalUploader) CreateTopics(topics ...string) error {
// ignore topics too long for the db
c := 0
for i := range topics {
if len(topics[i]) <= 25 {
topics[c] = topics[i]
c++
}
}
topics = topics[:c]
return models.SaveTopics(g.repo.ID, topics...)
}
@@ -524,6 +534,7 @@ func (g *GiteaLocalUploader) CreatePullRequests(prs ...*base.PullRequest) error
}
for _, pr := range gprs {
g.issues.Store(pr.Issue.Index, pr.Issue.ID)
pull.AddToTaskQueue(pr)
}
return nil
}

View File

@@ -65,23 +65,25 @@ func (f *GithubDownloaderV3Factory) GitServiceType() structs.GitServiceType {
// GithubDownloaderV3 implements a Downloader interface to get repository information
// from github via APIv3
type GithubDownloaderV3 struct {
ctx context.Context
client *github.Client
repoOwner string
repoName string
userName string
password string
rate *github.Rate
ctx context.Context
client *github.Client
repoOwner string
repoName string
userName string
password string
rate *github.Rate
maxPerPage int
}
// NewGithubDownloaderV3 creates a github Downloader via github v3 API
func NewGithubDownloaderV3(ctx context.Context, baseURL, userName, password, token, repoOwner, repoName string) *GithubDownloaderV3 {
var downloader = GithubDownloaderV3{
userName: userName,
password: password,
ctx: ctx,
repoOwner: repoOwner,
repoName: repoName,
userName: userName,
password: password,
ctx: ctx,
repoOwner: repoOwner,
repoName: repoName,
maxPerPage: 100,
}
client := &http.Client{
@@ -177,7 +179,7 @@ func (g *GithubDownloaderV3) GetTopics() ([]string, error) {
// GetMilestones returns milestones
func (g *GithubDownloaderV3) GetMilestones() ([]*base.Milestone, error) {
var perPage = 100
var perPage = g.maxPerPage
var milestones = make([]*base.Milestone, 0, perPage)
for i := 1; ; i++ {
g.sleep()
@@ -233,7 +235,7 @@ func convertGithubLabel(label *github.Label) *base.Label {
// GetLabels returns labels
func (g *GithubDownloaderV3) GetLabels() ([]*base.Label, error) {
var perPage = 100
var perPage = g.maxPerPage
var labels = make([]*base.Label, 0, perPage)
for i := 1; ; i++ {
g.sleep()
@@ -304,7 +306,7 @@ func (g *GithubDownloaderV3) convertGithubRelease(rel *github.RepositoryRelease)
// GetReleases returns releases
func (g *GithubDownloaderV3) GetReleases() ([]*base.Release, error) {
var perPage = 100
var perPage = g.maxPerPage
var releases = make([]*base.Release, 0, perPage)
for i := 1; ; i++ {
g.sleep()
@@ -342,6 +344,9 @@ func (g *GithubDownloaderV3) GetAsset(_ string, _, id int64) (io.ReadCloser, err
// GetIssues returns issues according to start and limit
func (g *GithubDownloaderV3) GetIssues(page, perPage int) ([]*base.Issue, bool, error) {
if perPage > g.maxPerPage {
perPage = g.maxPerPage
}
opt := &github.IssueListByRepoOptions{
Sort: "created",
Direction: "asc",
@@ -429,7 +434,7 @@ func (g *GithubDownloaderV3) GetIssues(page, perPage int) ([]*base.Issue, bool,
// GetComments returns comments according to issueNumber
func (g *GithubDownloaderV3) GetComments(issueNumber int64) ([]*base.Comment, error) {
var (
allComments = make([]*base.Comment, 0, 100)
allComments = make([]*base.Comment, 0, g.maxPerPage)
created = "created"
asc = "asc"
)
@@ -437,7 +442,7 @@ func (g *GithubDownloaderV3) GetComments(issueNumber int64) ([]*base.Comment, er
Sort: &created,
Direction: &asc,
ListOptions: github.ListOptions{
PerPage: 100,
PerPage: g.maxPerPage,
},
}
for {
@@ -459,7 +464,7 @@ func (g *GithubDownloaderV3) GetComments(issueNumber int64) ([]*base.Comment, er
g.sleep()
res, resp, err := g.client.Reactions.ListIssueCommentReactions(g.ctx, g.repoOwner, g.repoName, comment.GetID(), &github.ListOptions{
Page: i,
PerPage: 100,
PerPage: g.maxPerPage,
})
if err != nil {
return nil, err
@@ -497,6 +502,9 @@ func (g *GithubDownloaderV3) GetComments(issueNumber int64) ([]*base.Comment, er
// GetPullRequests returns pull requests according to page and perPage
func (g *GithubDownloaderV3) GetPullRequests(page, perPage int) ([]*base.PullRequest, bool, error) {
if perPage > g.maxPerPage {
perPage = g.maxPerPage
}
opt := &github.PullRequestListOptions{
Sort: "created",
Direction: "asc",
@@ -650,7 +658,7 @@ func (g *GithubDownloaderV3) convertGithubReviewComments(cs []*github.PullReques
g.sleep()
res, resp, err := g.client.Reactions.ListPullRequestCommentReactions(g.ctx, g.repoOwner, g.repoName, c.GetID(), &github.ListOptions{
Page: i,
PerPage: 100,
PerPage: g.maxPerPage,
})
if err != nil {
return nil, err
@@ -687,9 +695,9 @@ func (g *GithubDownloaderV3) convertGithubReviewComments(cs []*github.PullReques
// GetReviews returns pull request reviews
func (g *GithubDownloaderV3) GetReviews(pullRequestNumber int64) ([]*base.Review, error) {
var allReviews = make([]*base.Review, 0, 100)
var allReviews = make([]*base.Review, 0, g.maxPerPage)
opt := &github.ListOptions{
PerPage: 100,
PerPage: g.maxPerPage,
}
for {
g.sleep()
@@ -703,7 +711,7 @@ func (g *GithubDownloaderV3) GetReviews(pullRequestNumber int64) ([]*base.Review
r.IssueIndex = pullRequestNumber
// retrieve all review comments
opt2 := &github.ListOptions{
PerPage: 100,
PerPage: g.maxPerPage,
}
for {
g.sleep()

View File

@@ -11,6 +11,7 @@ import (
"io"
"net/http"
"net/url"
"path"
"strings"
"time"
@@ -68,6 +69,7 @@ type GitlabDownloader struct {
repoName string
issueCount int64
fetchPRcomments bool
maxPerPage int
}
// NewGitlabDownloader creates a gitlab Downloader via gitlab API
@@ -86,6 +88,30 @@ func NewGitlabDownloader(ctx context.Context, baseURL, repoPath, username, passw
return nil, err
}
// split namespace and subdirectory
pathParts := strings.Split(strings.Trim(repoPath, "/"), "/")
var resp *gitlab.Response
u, _ := url.Parse(baseURL)
for len(pathParts) >= 2 {
_, resp, err = gitlabClient.Version.GetVersion()
if err == nil || resp != nil && resp.StatusCode == 401 {
err = nil // if no authentication given, this still should work
break
}
u.Path = path.Join(u.Path, pathParts[0])
baseURL = u.String()
pathParts = pathParts[1:]
_ = gitlab.WithBaseURL(baseURL)(gitlabClient)
repoPath = strings.Join(pathParts, "/")
}
if err != nil {
log.Trace("Error could not get gitlab version: %v", err)
return nil, err
}
log.Trace("gitlab downloader: use BaseURL: '%s' and RepoPath: '%s'", baseURL, repoPath)
// Grab and store project/repo ID here, due to issues using the URL escaped path
gr, _, err := gitlabClient.Projects.GetProject(repoPath, nil, nil, gitlab.WithContext(ctx))
if err != nil {
@@ -99,10 +125,11 @@ func NewGitlabDownloader(ctx context.Context, baseURL, repoPath, username, passw
}
return &GitlabDownloader{
ctx: ctx,
client: gitlabClient,
repoID: gr.ID,
repoName: gr.Name,
ctx: ctx,
client: gitlabClient,
repoID: gr.ID,
repoName: gr.Name,
maxPerPage: 100,
}, nil
}
@@ -159,7 +186,7 @@ func (g *GitlabDownloader) GetTopics() ([]string, error) {
// GetMilestones returns milestones
func (g *GitlabDownloader) GetMilestones() ([]*base.Milestone, error) {
var perPage = 100
var perPage = g.maxPerPage
var state = "all"
var milestones = make([]*base.Milestone, 0, perPage)
for i := 1; ; i++ {
@@ -230,7 +257,7 @@ func (g *GitlabDownloader) normalizeColor(val string) string {
// GetLabels returns labels
func (g *GitlabDownloader) GetLabels() ([]*base.Label, error) {
var perPage = 100
var perPage = g.maxPerPage
var labels = make([]*base.Label, 0, perPage)
for i := 1; ; i++ {
ls, _, err := g.client.Labels.ListLabels(g.repoID, &gitlab.ListLabelsOptions{ListOptions: gitlab.ListOptions{
@@ -281,7 +308,7 @@ func (g *GitlabDownloader) convertGitlabRelease(rel *gitlab.Release) *base.Relea
// GetReleases returns releases
func (g *GitlabDownloader) GetReleases() ([]*base.Release, error) {
var perPage = 100
var perPage = g.maxPerPage
var releases = make([]*base.Release, 0, perPage)
for i := 1; ; i++ {
ls, _, err := g.client.Releases.ListReleases(g.repoID, &gitlab.ListReleasesOptions{
@@ -330,6 +357,10 @@ func (g *GitlabDownloader) GetIssues(page, perPage int) ([]*base.Issue, bool, er
state := "all"
sort := "asc"
if perPage > g.maxPerPage {
perPage = g.maxPerPage
}
opt := &gitlab.ListProjectIssuesOptions{
State: &state,
Sort: &sort,
@@ -401,7 +432,7 @@ func (g *GitlabDownloader) GetIssues(page, perPage int) ([]*base.Issue, bool, er
// GetComments returns comments according issueNumber
// TODO: figure out how to transfer comment reactions
func (g *GitlabDownloader) GetComments(issueNumber int64) ([]*base.Comment, error) {
var allComments = make([]*base.Comment, 0, 100)
var allComments = make([]*base.Comment, 0, g.maxPerPage)
var page = 1
var realIssueNumber int64
@@ -415,14 +446,14 @@ func (g *GitlabDownloader) GetComments(issueNumber int64) ([]*base.Comment, erro
realIssueNumber = issueNumber
comments, resp, err = g.client.Discussions.ListIssueDiscussions(g.repoID, int(realIssueNumber), &gitlab.ListIssueDiscussionsOptions{
Page: page,
PerPage: 100,
PerPage: g.maxPerPage,
}, nil, gitlab.WithContext(g.ctx))
} else {
// If this is a PR, we need to figure out the Gitlab/original PR ID to be passed below
realIssueNumber = issueNumber - g.issueCount
comments, resp, err = g.client.Discussions.ListMergeRequestDiscussions(g.repoID, int(realIssueNumber), &gitlab.ListMergeRequestDiscussionsOptions{
Page: page,
PerPage: 100,
PerPage: g.maxPerPage,
}, nil, gitlab.WithContext(g.ctx))
}
@@ -465,6 +496,10 @@ func (g *GitlabDownloader) GetComments(issueNumber int64) ([]*base.Comment, erro
// GetPullRequests returns pull requests according to page and perPage
func (g *GitlabDownloader) GetPullRequests(page, perPage int) ([]*base.PullRequest, bool, error) {
if perPage > g.maxPerPage {
perPage = g.maxPerPage
}
opt := &gitlab.ListProjectMergeRequestsOptions{
ListOptions: gitlab.ListOptions{
PerPage: perPage,
@@ -574,8 +609,12 @@ func (g *GitlabDownloader) GetPullRequests(page, perPage int) ([]*base.PullReque
// GetReviews returns pull request reviews
func (g *GitlabDownloader) GetReviews(pullRequestNumber int64) ([]*base.Review, error) {
state, _, err := g.client.MergeRequestApprovals.GetApprovalState(g.repoID, int(pullRequestNumber), gitlab.WithContext(g.ctx))
state, resp, err := g.client.MergeRequestApprovals.GetApprovalState(g.repoID, int(pullRequestNumber), gitlab.WithContext(g.ctx))
if err != nil {
if resp != nil && resp.StatusCode == 404 {
log.Error(fmt.Sprintf("GitlabDownloader: an error occurred while migrating: '%s'", err.Error()))
return []*base.Review{}, nil
}
return nil, err
}

View File

@@ -8,9 +8,13 @@ package migrations
import (
"context"
"fmt"
"net"
"net/url"
"strings"
"code.gitea.io/gitea/models"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/matchlist"
"code.gitea.io/gitea/modules/migrations/base"
"code.gitea.io/gitea/modules/setting"
)
@@ -20,6 +24,9 @@ type MigrateOptions = base.MigrateOptions
var (
factories []base.DownloaderFactory
allowList *matchlist.Matchlist
blockList *matchlist.Matchlist
)
// RegisterDownloaderFactory registers a downloader factory
@@ -27,12 +34,49 @@ func RegisterDownloaderFactory(factory base.DownloaderFactory) {
factories = append(factories, factory)
}
func isMigrateURLAllowed(remoteURL string) error {
u, err := url.Parse(strings.ToLower(remoteURL))
if err != nil {
return err
}
if strings.EqualFold(u.Scheme, "http") || strings.EqualFold(u.Scheme, "https") {
if len(setting.Migrations.AllowedDomains) > 0 {
if !allowList.Match(u.Host) {
return &models.ErrMigrationNotAllowed{Host: u.Host}
}
} else {
if blockList.Match(u.Host) {
return &models.ErrMigrationNotAllowed{Host: u.Host}
}
}
}
if !setting.Migrations.AllowLocalNetworks {
addrList, err := net.LookupIP(strings.Split(u.Host, ":")[0])
if err != nil {
return &models.ErrMigrationNotAllowed{Host: u.Host, NotResolvedIP: true}
}
for _, addr := range addrList {
if isIPPrivate(addr) || !addr.IsGlobalUnicast() {
return &models.ErrMigrationNotAllowed{Host: u.Host, PrivateNet: addr.String()}
}
}
}
return nil
}
// MigrateRepository migrates a repository according to MigrateOptions
func MigrateRepository(ctx context.Context, doer *models.User, ownerName string, opts base.MigrateOptions) (*models.Repository, error) {
err := isMigrateURLAllowed(opts.CloneAddr)
if err != nil {
return nil, err
}
var (
downloader base.Downloader
uploader = NewGiteaLocalUploader(ctx, doer, ownerName, opts.RepoName)
err error
)
for _, factory := range factories {
@@ -69,7 +113,7 @@ func MigrateRepository(ctx context.Context, doer *models.User, ownerName string,
}
if err2 := models.CreateRepositoryNotice(fmt.Sprintf("Migrate repository from %s failed: %v", opts.OriginalURL, err)); err2 != nil {
log.Error("create respotiry notice failed: ", err2)
log.Error("create repository notice failed: ", err2)
}
return nil, err
}
@@ -308,3 +352,32 @@ func migrateRepository(downloader base.Downloader, uploader base.Uploader, opts
return nil
}
// Init migrations service
func Init() error {
var err error
allowList, err = matchlist.NewMatchlist(setting.Migrations.AllowedDomains...)
if err != nil {
return fmt.Errorf("init migration allowList domains failed: %v", err)
}
blockList, err = matchlist.NewMatchlist(setting.Migrations.BlockedDomains...)
if err != nil {
return fmt.Errorf("init migration blockList domains failed: %v", err)
}
return nil
}
// isIPPrivate reports whether ip is a private address, according to
// RFC 1918 (IPv4 addresses) and RFC 4193 (IPv6 addresses).
// from https://github.com/golang/go/pull/42793
// TODO remove if https://github.com/golang/go/issues/29146 got resolved
func isIPPrivate(ip net.IP) bool {
if ip4 := ip.To4(); ip4 != nil {
return ip4[0] == 10 ||
(ip4[0] == 172 && ip4[1]&0xf0 == 16) ||
(ip4[0] == 192 && ip4[1] == 168)
}
return len(ip) == net.IPv6len && ip[0]&0xfe == 0xfc
}
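
Taken together, isMigrateURLAllowed consults the allow list only when ALLOWED_DOMAINS is non-empty, otherwise falls back to the block list, and unless ALLOW_LOCALNETWORKS is enabled it also resolves the host and rejects private or non-global-unicast addresses. A minimal sketch of how isIPPrivate classifies a few addresses (illustrative only, not part of this changeset):

package migrations

import (
	"net"
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestIsIPPrivateSketch(t *testing.T) {
	// RFC 1918 IPv4 ranges and RFC 4193 IPv6 ULAs are reported as private.
	assert.True(t, isIPPrivate(net.ParseIP("10.0.0.1")))
	assert.True(t, isIPPrivate(net.ParseIP("172.16.0.1")))
	assert.True(t, isIPPrivate(net.ParseIP("192.168.1.1")))
	assert.True(t, isIPPrivate(net.ParseIP("fd00::1")))
	// Public addresses are not; loopback is rejected separately by the
	// IsGlobalUnicast check in isMigrateURLAllowed.
	assert.False(t, isIPPrivate(net.ParseIP("8.8.8.8")))
}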

View File

@@ -0,0 +1,34 @@
// Copyright 2019 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package migrations
import (
"testing"
"code.gitea.io/gitea/modules/setting"
"github.com/stretchr/testify/assert"
)
func TestMigrateWhiteBlocklist(t *testing.T) {
setting.Migrations.AllowedDomains = []string{"github.com"}
assert.NoError(t, Init())
err := isMigrateURLAllowed("https://gitlab.com/gitlab/gitlab.git")
assert.Error(t, err)
err = isMigrateURLAllowed("https://github.com/go-gitea/gitea.git")
assert.NoError(t, err)
setting.Migrations.AllowedDomains = []string{}
setting.Migrations.BlockedDomains = []string{"github.com"}
assert.NoError(t, Init())
err = isMigrateURLAllowed("https://gitlab.com/gitlab/gitlab.git")
assert.NoError(t, err)
err = isMigrateURLAllowed("https://github.com/go-gitea/gitea.git")
assert.Error(t, err)
}

View File

@@ -314,7 +314,7 @@ func (a *actionNotifier) NotifySyncDeleteRef(doer *models.User, repo *models.Rep
if err := models.NotifyWatchers(&models.Action{
ActUserID: repo.OwnerID,
ActUser: repo.MustOwner(),
OpType: models.ActionMirrorSyncCreate,
OpType: models.ActionMirrorSyncDelete,
RepoID: repo.ID,
Repo: repo,
IsPrivate: repo.IsPrivate,

View File

@@ -797,3 +797,11 @@ func (m *webhookNotifier) NotifySyncPushCommits(pusher *models.User, repo *model
log.Error("PrepareWebhooks: %v", err)
}
}
func (m *webhookNotifier) NotifySyncCreateRef(pusher *models.User, repo *models.Repository, refType, refFullName string) {
m.NotifyCreateRef(pusher, repo, refType, refFullName)
}
func (m *webhookNotifier) NotifySyncDeleteRef(pusher *models.User, repo *models.Repository, refType, refFullName string) {
m.NotifyDeleteRef(pusher, repo, refType, refFullName)
}

View File

@@ -235,40 +235,78 @@ func findAllIssueReferencesMarkdown(content string) []*rawReference {
return findAllIssueReferencesBytes(bcontent, links)
}
func convertFullHTMLReferencesToShortRefs(re *regexp.Regexp, contentBytes *[]byte) {
// We will iterate through the content, rewrite and simplify full references.
//
// We want to transform something like:
//
// this is a https://ourgitea.com/git/owner/repo/issues/123456789, foo
// https://ourgitea.com/git/owner/repo/pulls/123456789
//
// Into something like:
//
// this is a #123456789, foo
// !123456789
pos := 0
for {
// re looks for something like: (\s|^|\(|\[)https://ourgitea.com/git/(owner/repo)/(issues)/(123456789)(?:\s|$|\)|\]|[:;,.?!]\s|[:;,.?!]$)
match := re.FindSubmatchIndex((*contentBytes)[pos:])
if match == nil {
break
}
// match is a bunch of indices into the content from pos onwards so
// to simplify things let's just add pos to all of the indices in match
for i := range match {
match[i] += pos
}
// match[0]-match[1] is whole string
// match[2]-match[3] is preamble
// move the position to the end of the preamble
pos = match[3]
// match[4]-match[5] is owner/repo
// now copy the owner/repo to end of the preamble
endPos := pos + match[5] - match[4]
copy((*contentBytes)[pos:endPos], (*contentBytes)[match[4]:match[5]])
// move the current position to the end of the newly copied owner/repo
pos = endPos
// Now set the issue/pull marker:
//
// match[6]-match[7] == 'issues'
(*contentBytes)[pos] = '#'
if string((*contentBytes)[match[6]:match[7]]) == "pulls" {
(*contentBytes)[pos] = '!'
}
pos++
// Then add the issue/pull number
//
// match[8]-match[9] is the number
endPos = pos + match[9] - match[8]
copy((*contentBytes)[pos:endPos], (*contentBytes)[match[8]:match[9]])
// Now copy what's left at the end of the string to the new end position
copy((*contentBytes)[endPos:], (*contentBytes)[match[9]:])
// now we reset the length
// our new section has length endPos - match[3]
// our old section has length match[9] - match[3]
(*contentBytes) = (*contentBytes)[:len((*contentBytes))-match[9]+endPos]
pos = endPos
}
}
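
The repeated copy-and-truncate steps above are the usual idiom for shrinking a byte slice in place when a matched span is replaced by something shorter, which always holds here because "owner/repo#123" is shorter than the full URL. Condensed into a single helper, the idiom looks roughly like this sketch:

// replaceInPlace overwrites content[start:end] with the shorter repl and
// truncates the slice, mirroring the copy/length bookkeeping done per match
// above. It assumes len(repl) <= end-start.
func replaceInPlace(content []byte, start, end int, repl []byte) []byte {
	copy(content[start:], repl)
	newEnd := start + len(repl)
	copy(content[newEnd:], content[end:])
	return content[:len(content)-(end-newEnd)]
}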
// FindAllIssueReferences returns a list of unvalidated references found in a string.
func FindAllIssueReferences(content string) []IssueReference {
// Need to convert fully qualified html references to local system to #/! short codes
contentBytes := []byte(content)
if re := getGiteaIssuePullPattern(); re != nil {
pos := 0
for {
match := re.FindSubmatchIndex(contentBytes[pos:])
if match == nil {
break
}
// match[0]-match[1] is whole string
// match[2]-match[3] is preamble
pos += match[3]
// match[4]-match[5] is owner/repo
endPos := pos + match[5] - match[4]
copy(contentBytes[pos:endPos], contentBytes[match[4]:match[5]])
pos = endPos
// match[6]-match[7] == 'issues'
contentBytes[pos] = '#'
if string(contentBytes[match[6]:match[7]]) == "pulls" {
contentBytes[pos] = '!'
}
pos++
// match[8]-match[9] is the number
endPos = pos + match[9] - match[8]
copy(contentBytes[pos:endPos], contentBytes[match[8]:match[9]])
copy(contentBytes[endPos:], contentBytes[match[9]:])
// now we reset the length
// our new section has length endPos - match[3]
// our old section has length match[9] - match[3]
contentBytes = contentBytes[:len(contentBytes)-match[9]+endPos]
pos = endPos
}
convertFullHTMLReferencesToShortRefs(re, &contentBytes)
} else {
log.Debug("No GiteaIssuePullPattern pattern")
}

View File

@@ -5,6 +5,7 @@
package references
import (
"regexp"
"testing"
"code.gitea.io/gitea/modules/setting"
@@ -29,6 +30,26 @@ type testResult struct {
TimeLog string
}
func TestConvertFullHTMLReferencesToShortRefs(t *testing.T) {
re := regexp.MustCompile(`(\s|^|\(|\[)` +
regexp.QuoteMeta("https://ourgitea.com/git/") +
`([0-9a-zA-Z-_\.]+/[0-9a-zA-Z-_\.]+)/` +
`((?:issues)|(?:pulls))/([0-9]+)(?:\s|$|\)|\]|[:;,.?!]\s|[:;,.?!]$)`)
test := `this is a https://ourgitea.com/git/owner/repo/issues/123456789, foo
https://ourgitea.com/git/owner/repo/pulls/123456789
And https://ourgitea.com/git/owner/repo/pulls/123
`
expect := `this is a owner/repo#123456789, foo
owner/repo!123456789
And owner/repo!123
`
contentBytes := []byte(test)
convertFullHTMLReferencesToShortRefs(re, &contentBytes)
result := string(contentBytes)
assert.EqualValues(t, expect, result)
}
func TestFindAllIssueReferences(t *testing.T) {
fixtures := []testFixture{
@@ -106,6 +127,13 @@ func TestFindAllIssueReferences(t *testing.T) {
{202, "user4", "repo5", "202", true, XRefActionNone, nil, nil, ""},
},
},
{
"This http://gitea.com:3000/user4/repo5/pulls/202 yes. http://gitea.com:3000/user4/repo5/pulls/203 no",
[]testResult{
{202, "user4", "repo5", "202", true, XRefActionNone, nil, nil, ""},
{203, "user4", "repo5", "203", true, XRefActionNone, nil, nil, ""},
},
},
{
"This http://GiTeA.COM:3000/user4/repo6/pulls/205 yes.",
[]testResult{

View File

@@ -162,10 +162,10 @@ func initRepoCommit(tmpPath string, repo *models.Repository, u *models.User, def
defaultBranch = setting.Repository.DefaultBranch
}
if stdout, err := git.NewCommand("push", "origin", "master:"+defaultBranch).
if stdout, err := git.NewCommand("push", "origin", "HEAD:"+defaultBranch).
SetDescription(fmt.Sprintf("initRepoCommit (git push): %s", tmpPath)).
RunInDirWithEnv(tmpPath, models.InternalPushingEnvironment(u, repo)); err != nil {
log.Error("Failed to push back to master: Stdout: %s\nError: %v", stdout, err)
log.Error("Failed to push back to HEAD: Stdout: %s\nError: %v", stdout, err)
return fmt.Errorf("git push: %v", err)
}

View File

@@ -5,6 +5,7 @@
package repository
import (
"context"
"fmt"
"path"
"strings"
@@ -41,7 +42,7 @@ func WikiRemoteURL(remote string) string {
}
// MigrateRepositoryGitData starts migrating git-related data after the migrating repository has been created
func MigrateRepositoryGitData(doer, u *models.User, repo *models.Repository, opts migration.MigrateOptions) (*models.Repository, error) {
func MigrateRepositoryGitData(ctx context.Context, u *models.User, repo *models.Repository, opts migration.MigrateOptions) (*models.Repository, error) {
repoPath := models.RepoPath(u.Name, opts.RepoName)
if u.IsOrganization() {
@@ -61,7 +62,7 @@ func MigrateRepositoryGitData(doer, u *models.User, repo *models.Repository, opt
return repo, fmt.Errorf("Failed to remove %s: %v", repoPath, err)
}
if err = git.Clone(opts.CloneAddr, repoPath, git.CloneRepoOptions{
if err = git.CloneWithContext(ctx, opts.CloneAddr, repoPath, git.CloneRepoOptions{
Mirror: true,
Quiet: true,
Timeout: migrateTimeout,
@@ -77,7 +78,7 @@ func MigrateRepositoryGitData(doer, u *models.User, repo *models.Repository, opt
return repo, fmt.Errorf("Failed to remove %s: %v", wikiPath, err)
}
if err = git.Clone(wikiRemotePath, wikiPath, git.CloneRepoOptions{
if err = git.CloneWithContext(ctx, wikiRemotePath, wikiPath, git.CloneRepoOptions{
Mirror: true,
Quiet: true,
Timeout: migrateTimeout,

View File

@@ -62,6 +62,11 @@ func InitDBConfig() {
sec := Cfg.Section("database")
Database.Type = sec.Key("DB_TYPE").String()
defaultCharset := "utf8"
Database.UseMySQL = false
Database.UseSQLite3 = false
Database.UsePostgreSQL = false
Database.UseMSSQL = false
switch Database.Type {
case "sqlite3":
Database.UseSQLite3 = true

View File

@@ -4,11 +4,18 @@
package setting
import (
"strings"
)
var (
// Migrations settings
Migrations = struct {
MaxAttempts int
RetryBackoff int
MaxAttempts int
RetryBackoff int
AllowedDomains []string
BlockedDomains []string
AllowLocalNetworks bool
}{
MaxAttempts: 3,
RetryBackoff: 3,
@@ -19,4 +26,15 @@ func newMigrationsService() {
sec := Cfg.Section("migrations")
Migrations.MaxAttempts = sec.Key("MAX_ATTEMPTS").MustInt(Migrations.MaxAttempts)
Migrations.RetryBackoff = sec.Key("RETRY_BACKOFF").MustInt(Migrations.RetryBackoff)
Migrations.AllowedDomains = sec.Key("ALLOWED_DOMAINS").Strings(",")
for i := range Migrations.AllowedDomains {
Migrations.AllowedDomains[i] = strings.ToLower(Migrations.AllowedDomains[i])
}
Migrations.BlockedDomains = sec.Key("BLOCKED_DOMAINS").Strings(",")
for i := range Migrations.BlockedDomains {
Migrations.BlockedDomains[i] = strings.ToLower(Migrations.BlockedDomains[i])
}
Migrations.AllowLocalNetworks = sec.Key("ALLOW_LOCALNETWORKS").MustBool(false)
}
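
The three new keys live in the [migrations] section of app.ini, and the domain lists are lower-cased on load. A hedged sketch of how they could be exercised (the test name and values are illustrative, not part of this changeset):

package setting

import (
	"testing"

	"github.com/stretchr/testify/assert"
	ini "gopkg.in/ini.v1"
)

func Test_newMigrationsService_sketch(t *testing.T) {
	iniStr := `
[migrations]
ALLOWED_DOMAINS = GitHub.com, gitlab.com
ALLOW_LOCALNETWORKS = true
`
	Cfg, _ = ini.Load([]byte(iniStr))
	newMigrationsService()

	// Domains are lower-cased; MAX_ATTEMPTS and RETRY_BACKOFF keep their defaults.
	assert.EqualValues(t, []string{"github.com", "gitlab.com"}, Migrations.AllowedDomains)
	assert.True(t, Migrations.AllowLocalNetworks)
}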

View File

@@ -143,7 +143,7 @@ var (
MaxCreationLimit: -1,
MirrorQueueLength: 1000,
PullRequestQueueLength: 1000,
PreferredLicenses: []string{"Apache License 2.0,MIT License"},
PreferredLicenses: []string{"Apache License 2.0", "MIT License"},
DisableHTTPGit: false,
AccessControlAllowOrigin: "",
UseCompatSSHURI: false,

View File

@@ -21,7 +21,7 @@ type Storage struct {
// MapTo implements the Mappable interface
func (s *Storage) MapTo(v interface{}) error {
pathValue := reflect.ValueOf(v).FieldByName("Path")
pathValue := reflect.ValueOf(v).Elem().FieldByName("Path")
if pathValue.IsValid() && pathValue.Kind() == reflect.String {
pathValue.SetString(s.Path)
}
@@ -31,24 +31,10 @@ func (s *Storage) MapTo(v interface{}) error {
return nil
}
func getStorage(name, typ string, overrides ...*ini.Section) Storage {
sectionName := "storage"
if len(name) > 0 {
sectionName = sectionName + "." + typ
}
func getStorage(name, typ string, targetSec *ini.Section) Storage {
const sectionName = "storage"
sec := Cfg.Section(sectionName)
if len(overrides) == 0 {
overrides = []*ini.Section{
Cfg.Section(sectionName + "." + name),
}
}
var storage Storage
storage.Type = sec.Key("STORAGE_TYPE").MustString("")
storage.ServeDirect = sec.Key("SERVE_DIRECT").MustBool(false)
// Global Defaults
sec.Key("MINIO_ENDPOINT").MustString("localhost:9000")
sec.Key("MINIO_ACCESS_KEY_ID").MustString("")
@@ -57,17 +43,37 @@ func getStorage(name, typ string, overrides ...*ini.Section) Storage {
sec.Key("MINIO_LOCATION").MustString("us-east-1")
sec.Key("MINIO_USE_SSL").MustBool(false)
storage.Section = sec
var storage Storage
storage.Section = targetSec
storage.Type = typ
overrides := make([]*ini.Section, 0, 3)
nameSec, err := Cfg.GetSection(sectionName + "." + name)
if err == nil {
overrides = append(overrides, nameSec)
}
typeSec, err := Cfg.GetSection(sectionName + "." + typ)
if err == nil {
overrides = append(overrides, typeSec)
nextType := typeSec.Key("STORAGE_TYPE").String()
if len(nextType) > 0 {
storage.Type = nextType // Support custom STORAGE_TYPE
}
}
overrides = append(overrides, sec)
for _, override := range overrides {
for _, key := range storage.Section.Keys() {
if !override.HasKey(key.Name()) {
_, _ = override.NewKey(key.Name(), key.Value())
for _, key := range override.Keys() {
if !targetSec.HasKey(key.Name()) {
_, _ = targetSec.NewKey(key.Name(), key.Value())
}
}
storage.ServeDirect = override.Key("SERVE_DIRECT").MustBool(false)
storage.Section = override
if len(storage.Type) == 0 {
storage.Type = override.Key("STORAGE_TYPE").String()
}
}
storage.ServeDirect = storage.Section.Key("SERVE_DIRECT").MustBool(false)
// Specific defaults
storage.Path = storage.Section.Key("PATH").MustString(filepath.Join(AppDataPath, name))
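
Because keys are only copied into the target section when it does not already define them, the effective precedence for an attachments lookup is [attachment] over [storage.attachments] over [storage.<type>] over [storage]; the tests in the next file cover these combinations one by one. A compact sketch that summarises the order in a single case (illustrative values, same package as those tests):

package setting

import (
	"testing"

	"github.com/stretchr/testify/assert"
	ini "gopkg.in/ini.v1"
)

func Test_getStoragePrecedence_sketch(t *testing.T) {
	iniStr := `
[attachment]
STORAGE_TYPE = minio
MINIO_BUCKET = from-attachment

[storage.attachments]
MINIO_BUCKET = from-name-section

[storage.minio]
MINIO_BUCKET = from-type-section

[storage]
MINIO_BUCKET = from-storage
`
	Cfg, _ = ini.Load([]byte(iniStr))
	sec := Cfg.Section("attachment")
	storage := getStorage("attachments", sec.Key("STORAGE_TYPE").String(), sec)

	// The most specific section wins; less specific sections only fill in
	// keys that are still missing.
	assert.EqualValues(t, "minio", storage.Type)
	assert.EqualValues(t, "from-attachment", storage.Section.Key("MINIO_BUCKET").String())
}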

View File

@@ -0,0 +1,197 @@
// Copyright 2020 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package setting
import (
"testing"
"github.com/stretchr/testify/assert"
ini "gopkg.in/ini.v1"
)
func Test_getStorageCustomType(t *testing.T) {
iniStr := `
[attachment]
STORAGE_TYPE = my_minio
MINIO_BUCKET = gitea-attachment
[storage.my_minio]
STORAGE_TYPE = minio
MINIO_ENDPOINT = my_minio:9000
`
Cfg, _ = ini.Load([]byte(iniStr))
sec := Cfg.Section("attachment")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("attachments", storageType, sec)
assert.EqualValues(t, "minio", storage.Type)
assert.EqualValues(t, "my_minio:9000", storage.Section.Key("MINIO_ENDPOINT").String())
assert.EqualValues(t, "gitea-attachment", storage.Section.Key("MINIO_BUCKET").String())
}
func Test_getStorageNameSectionOverridesTypeSection(t *testing.T) {
iniStr := `
[attachment]
STORAGE_TYPE = minio
[storage.attachments]
MINIO_BUCKET = gitea-attachment
[storage.minio]
MINIO_BUCKET = gitea
`
Cfg, _ = ini.Load([]byte(iniStr))
sec := Cfg.Section("attachment")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("attachments", storageType, sec)
assert.EqualValues(t, "minio", storage.Type)
assert.EqualValues(t, "gitea-attachment", storage.Section.Key("MINIO_BUCKET").String())
}
func Test_getStorageTypeSectionOverridesStorageSection(t *testing.T) {
iniStr := `
[attachment]
STORAGE_TYPE = minio
[storage.minio]
MINIO_BUCKET = gitea-minio
[storage]
MINIO_BUCKET = gitea
`
Cfg, _ = ini.Load([]byte(iniStr))
sec := Cfg.Section("attachment")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("attachments", storageType, sec)
assert.EqualValues(t, "minio", storage.Type)
assert.EqualValues(t, "gitea-minio", storage.Section.Key("MINIO_BUCKET").String())
}
func Test_getStorageSpecificOverridesStorage(t *testing.T) {
iniStr := `
[attachment]
STORAGE_TYPE = minio
MINIO_BUCKET = gitea-attachment
[storage.attachments]
MINIO_BUCKET = gitea
[storage]
STORAGE_TYPE = local
`
Cfg, _ = ini.Load([]byte(iniStr))
sec := Cfg.Section("attachment")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("attachments", storageType, sec)
assert.EqualValues(t, "minio", storage.Type)
assert.EqualValues(t, "gitea-attachment", storage.Section.Key("MINIO_BUCKET").String())
}
func Test_getStorageGetDefaults(t *testing.T) {
Cfg, _ = ini.Load([]byte(""))
sec := Cfg.Section("attachment")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("attachments", storageType, sec)
assert.EqualValues(t, "gitea", storage.Section.Key("MINIO_BUCKET").String())
}
func Test_getStorageMultipleName(t *testing.T) {
iniStr := `
[lfs]
MINIO_BUCKET = gitea-lfs
[attachment]
MINIO_BUCKET = gitea-attachment
[storage]
MINIO_BUCKET = gitea-storage
`
Cfg, _ = ini.Load([]byte(iniStr))
{
sec := Cfg.Section("attachment")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("attachments", storageType, sec)
assert.EqualValues(t, "gitea-attachment", storage.Section.Key("MINIO_BUCKET").String())
}
{
sec := Cfg.Section("lfs")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("lfs", storageType, sec)
assert.EqualValues(t, "gitea-lfs", storage.Section.Key("MINIO_BUCKET").String())
}
{
sec := Cfg.Section("avatar")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("avatars", storageType, sec)
assert.EqualValues(t, "gitea-storage", storage.Section.Key("MINIO_BUCKET").String())
}
}
func Test_getStorageUseOtherNameAsType(t *testing.T) {
iniStr := `
[attachment]
STORAGE_TYPE = lfs
[storage.lfs]
MINIO_BUCKET = gitea-storage
`
Cfg, _ = ini.Load([]byte(iniStr))
{
sec := Cfg.Section("attachment")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("attachments", storageType, sec)
assert.EqualValues(t, "gitea-storage", storage.Section.Key("MINIO_BUCKET").String())
}
{
sec := Cfg.Section("lfs")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("lfs", storageType, sec)
assert.EqualValues(t, "gitea-storage", storage.Section.Key("MINIO_BUCKET").String())
}
}
func Test_getStorageInheritStorageType(t *testing.T) {
iniStr := `
[storage]
STORAGE_TYPE = minio
`
Cfg, _ = ini.Load([]byte(iniStr))
sec := Cfg.Section("attachment")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("attachments", storageType, sec)
assert.EqualValues(t, "minio", storage.Type)
}
func Test_getStorageInheritNameSectionType(t *testing.T) {
iniStr := `
[storage.attachments]
STORAGE_TYPE = minio
`
Cfg, _ = ini.Load([]byte(iniStr))
sec := Cfg.Section("attachment")
storageType := sec.Key("STORAGE_TYPE").MustString("")
storage := getStorage("attachments", storageType, sec)
assert.EqualValues(t, "minio", storage.Type)
}

View File

@@ -11,6 +11,7 @@ import (
"os"
"path/filepath"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/util"
)
@@ -39,7 +40,7 @@ func NewLocalStorage(ctx context.Context, cfg interface{}) (ObjectStorage, error
return nil, err
}
config := configInterface.(LocalStorageConfig)
log.Info("Creating new Local Storage at %s", config.Path)
if err := os.MkdirAll(config.Path, os.ModePerm); err != nil {
return nil, err
}

View File

@@ -13,6 +13,7 @@ import (
"strings"
"time"
"code.gitea.io/gitea/modules/log"
"github.com/minio/minio-go/v7"
"github.com/minio/minio-go/v7/pkg/credentials"
)
@@ -30,7 +31,7 @@ type minioObject struct {
func (m *minioObject) Stat() (os.FileInfo, error) {
oi, err := m.Object.Stat()
if err != nil {
return nil, err
return nil, convertMinioErr(err)
}
return &minioFileInfo{oi}, nil
@@ -58,20 +59,41 @@ type MinioStorage struct {
basePath string
}
func convertMinioErr(err error) error {
if err == nil {
return nil
}
errResp, ok := err.(minio.ErrorResponse)
if !ok {
return err
}
// Convert two responses to standard analogues
switch errResp.Code {
case "NoSuchKey":
return os.ErrNotExist
case "AccessDenied":
return os.ErrPermission
}
return err
}
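
Mapping the NoSuchKey and AccessDenied responses onto os.ErrNotExist and os.ErrPermission lets callers keep using the standard library checks they already use for local storage. A short, hedged sketch of such a caller (the helper and interface below are illustrative, not part of the Gitea API):

package sketch

import (
	"fmt"
	"os"
)

// statter is the minimal shape needed here; both the Minio and local
// backends provide such a Stat method.
type statter interface {
	Stat(path string) (os.FileInfo, error)
}

// reportMissing treats a missing Minio object the same way as a missing
// local file, relying on convertMinioErr above.
func reportMissing(store statter, path string) {
	if _, err := store.Stat(path); err != nil {
		if os.IsNotExist(err) {
			fmt.Printf("object %s not found\n", path)
			return
		}
		fmt.Printf("stat %s failed: %v\n", path, err)
	}
}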
// NewMinioStorage returns a minio storage
func NewMinioStorage(ctx context.Context, cfg interface{}) (ObjectStorage, error) {
configInterface, err := toConfig(MinioStorageConfig{}, cfg)
if err != nil {
return nil, err
return nil, convertMinioErr(err)
}
config := configInterface.(MinioStorageConfig)
log.Info("Creating Minio storage at %s:%s with base path %s", config.Endpoint, config.Bucket, config.BasePath)
minioClient, err := minio.New(config.Endpoint, &minio.Options{
Creds: credentials.NewStaticV4(config.AccessKeyID, config.SecretAccessKey, ""),
Secure: config.UseSSL,
})
if err != nil {
return nil, err
return nil, convertMinioErr(err)
}
if err := minioClient.MakeBucket(ctx, config.Bucket, minio.MakeBucketOptions{
@@ -80,7 +102,7 @@ func NewMinioStorage(ctx context.Context, cfg interface{}) (ObjectStorage, error
// Check to see if we already own this bucket (which happens if you run this twice)
exists, errBucketExists := minioClient.BucketExists(ctx, config.Bucket)
if !exists || errBucketExists != nil {
return nil, err
return nil, convertMinioErr(err)
}
}
@@ -101,7 +123,7 @@ func (m *MinioStorage) Open(path string) (Object, error) {
var opts = minio.GetObjectOptions{}
object, err := m.client.GetObject(m.ctx, m.bucket, m.buildMinioPath(path), opts)
if err != nil {
return nil, err
return nil, convertMinioErr(err)
}
return &minioObject{object}, nil
}
@@ -117,7 +139,7 @@ func (m *MinioStorage) Save(path string, r io.Reader) (int64, error) {
minio.PutObjectOptions{ContentType: "application/octet-stream"},
)
if err != nil {
return 0, err
return 0, convertMinioErr(err)
}
return uploadInfo.Size, nil
}
@@ -164,14 +186,17 @@ func (m *MinioStorage) Stat(path string) (os.FileInfo, error) {
return nil, os.ErrNotExist
}
}
return nil, err
return nil, convertMinioErr(err)
}
return &minioFileInfo{info}, nil
}
// Delete delete a file
func (m *MinioStorage) Delete(path string) error {
return m.client.RemoveObject(m.ctx, m.bucket, m.buildMinioPath(path), minio.RemoveObjectOptions{})
if err := m.client.RemoveObject(m.ctx, m.bucket, m.buildMinioPath(path), minio.RemoveObjectOptions{}); err != nil {
return convertMinioErr(err)
}
return nil
}
// URL gets the redirect URL to a file. The presigned link is valid for 5 minutes.
@@ -179,7 +204,8 @@ func (m *MinioStorage) URL(path, name string) (*url.URL, error) {
reqParams := make(url.Values)
// TODO it may be good to embed images with 'inline' like ServeData does, but we don't want to have to read the file, do we?
reqParams.Set("response-content-disposition", "attachment; filename=\""+quoteEscaper.Replace(name)+"\"")
return m.client.PresignedGetObject(m.ctx, m.bucket, m.buildMinioPath(path), 5*time.Minute, reqParams)
u, err := m.client.PresignedGetObject(m.ctx, m.bucket, m.buildMinioPath(path), 5*time.Minute, reqParams)
return u, convertMinioErr(err)
}
// IterateObjects iterates across the objects in the miniostorage
@@ -193,13 +219,13 @@ func (m *MinioStorage) IterateObjects(fn func(path string, obj Object) error) er
}) {
object, err := m.client.GetObject(lobjectCtx, m.bucket, mObjInfo.Key, opts)
if err != nil {
return err
return convertMinioErr(err)
}
if err := func(object *minio.Object, fn func(path string, obj Object) error) error {
defer object.Close()
return fn(strings.TrimPrefix(m.basePath, mObjInfo.Key), &minioObject{object})
}(object, fn); err != nil {
return err
return convertMinioErr(err)
}
}
return nil

View File

@@ -12,6 +12,7 @@ import (
"net/url"
"os"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/setting"
)
@@ -141,21 +142,25 @@ func NewStorage(typStr string, cfg interface{}) (ObjectStorage, error) {
}
func initAvatars() (err error) {
Avatars, err = NewStorage(setting.Avatar.Storage.Type, setting.Avatar.Storage)
log.Info("Initialising Avatar storage with type: %s", setting.Avatar.Storage.Type)
Avatars, err = NewStorage(setting.Avatar.Storage.Type, &setting.Avatar.Storage)
return
}
func initAttachments() (err error) {
Attachments, err = NewStorage(setting.Attachment.Storage.Type, setting.Attachment.Storage)
log.Info("Initialising Attachment storage with type: %s", setting.Attachment.Storage.Type)
Attachments, err = NewStorage(setting.Attachment.Storage.Type, &setting.Attachment.Storage)
return
}
func initLFS() (err error) {
LFS, err = NewStorage(setting.LFS.Storage.Type, setting.LFS.Storage)
log.Info("Initialising LFS storage with type: %s", setting.LFS.Storage.Type)
LFS, err = NewStorage(setting.LFS.Storage.Type, &setting.LFS.Storage)
return
}
func initRepoAvatars() (err error) {
RepoAvatars, err = NewStorage(setting.RepoAvatar.Storage.Type, setting.RepoAvatar.Storage)
log.Info("Initialising Repository Avatar storage with type: %s", setting.RepoAvatar.Storage.Type)
RepoAvatars, err = NewStorage(setting.RepoAvatar.Storage.Type, &setting.RepoAvatar.Storage)
return
}
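
These initialisers now pass the storage configuration by pointer, which is why Storage.MapTo (earlier in this changeset) dereferences with Elem() before setting Path: FieldByName on the pointer Value itself would panic. A small self-contained sketch of that reflect pattern (the type and value are illustrative):

package main

import (
	"fmt"
	"reflect"
)

// localConfig stands in for a storage config struct with a Path field.
type localConfig struct {
	Path string
}

func main() {
	cfg := &localConfig{}

	// Dereference the pointer first; the resulting field is addressable
	// and can be set, exactly as MapTo does for the PATH override.
	field := reflect.ValueOf(cfg).Elem().FieldByName("Path")
	if field.IsValid() && field.Kind() == reflect.String {
		field.SetString("/data/attachments")
	}

	fmt.Println(cfg.Path) // /data/attachments
}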

View File

@@ -5,6 +5,7 @@
package task
import (
"context"
"errors"
"fmt"
"strings"
@@ -15,12 +16,13 @@ import (
"code.gitea.io/gitea/modules/migrations"
migration "code.gitea.io/gitea/modules/migrations/base"
"code.gitea.io/gitea/modules/notification"
"code.gitea.io/gitea/modules/process"
"code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/timeutil"
"code.gitea.io/gitea/modules/util"
)
func handleCreateError(owner *models.User, err error, name string) error {
func handleCreateError(owner *models.User, err error) error {
switch {
case models.IsErrReachLimitOfRepo(err):
return fmt.Errorf("You have already reached your limit of %d repositories", owner.MaxCreationLimit())
@@ -38,8 +40,8 @@ func handleCreateError(owner *models.User, err error, name string) error {
func runMigrateTask(t *models.Task) (err error) {
defer func() {
if e := recover(); e != nil {
err = fmt.Errorf("PANIC whilst trying to do migrate task: %v\nStacktrace: %v", err, log.Stack(2))
log.Critical("PANIC during runMigrateTask[%d] by DoerID[%d] to RepoID[%d] for OwnerID[%d]: %v", t.ID, t.DoerID, t.RepoID, t.OwnerID, err)
err = fmt.Errorf("PANIC whilst trying to do migrate task: %v", e)
log.Critical("PANIC during runMigrateTask[%d] by DoerID[%d] to RepoID[%d] for OwnerID[%d]: %v\nStacktrace: %v", t.ID, t.DoerID, t.RepoID, t.OwnerID, e, log.Stack(2))
}
if err == nil {
@@ -55,7 +57,8 @@ func runMigrateTask(t *models.Task) (err error) {
t.EndTime = timeutil.TimeStampNow()
t.Status = structs.TaskStatusFailed
t.Errors = err.Error()
if err := t.UpdateCols("status", "errors", "end_time"); err != nil {
t.RepoID = 0
if err := t.UpdateCols("status", "errors", "repo_id", "end_time"); err != nil {
log.Error("Task UpdateCols failed: %v", err)
}
@@ -66,8 +69,8 @@ func runMigrateTask(t *models.Task) (err error) {
}
}()
if err := t.LoadRepo(); err != nil {
return err
if err = t.LoadRepo(); err != nil {
return
}
// if the repository is ready, then just finish the task
@@ -75,33 +78,43 @@ func runMigrateTask(t *models.Task) (err error) {
return nil
}
if err := t.LoadDoer(); err != nil {
return err
if err = t.LoadDoer(); err != nil {
return
}
if err := t.LoadOwner(); err != nil {
return err
}
t.StartTime = timeutil.TimeStampNow()
t.Status = structs.TaskStatusRunning
if err := t.UpdateCols("start_time", "status"); err != nil {
return err
if err = t.LoadOwner(); err != nil {
return
}
var opts *migration.MigrateOptions
opts, err = t.MigrateConfig()
if err != nil {
return err
return
}
opts.MigrateToRepoID = t.RepoID
repo, err := migrations.MigrateRepository(graceful.GetManager().HammerContext(), t.Doer, t.Owner.Name, *opts)
var repo *models.Repository
ctx, cancel := context.WithCancel(graceful.GetManager().ShutdownContext())
defer cancel()
pm := process.GetManager()
pid := pm.Add(fmt.Sprintf("MigrateTask: %s/%s", t.Owner.Name, opts.RepoName), cancel)
defer pm.Remove(pid)
t.StartTime = timeutil.TimeStampNow()
t.Status = structs.TaskStatusRunning
if err = t.UpdateCols("start_time", "status"); err != nil {
return
}
repo, err = migrations.MigrateRepository(ctx, t.Doer, t.Owner.Name, *opts)
if err == nil {
log.Trace("Repository migrated [%d]: %s/%s", repo.ID, t.Owner.Name, repo.Name)
return nil
return
}
if models.IsErrRepoAlreadyExist(err) {
return errors.New("The repository name is already used")
err = errors.New("The repository name is already used")
return
}
// remoteAddr may contain credentials, so we sanitize it
@@ -113,5 +126,7 @@ func runMigrateTask(t *models.Task) (err error) {
return fmt.Errorf("Migration failed: %v", err.Error())
}
return handleCreateError(t.Owner, err, "MigratePost")
// do not be tempted to coalesce this line with the return
err = handleCreateError(t.Owner, err)
return
}
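
The corrected recover block formats the recovered value e rather than err, which is still nil at that point, and assigns the result to the named return value, so a panic inside the task surfaces as an ordinary error. A self-contained sketch of that pattern (names illustrative):

package main

import (
	"fmt"
	"log"
)

// runTask mirrors the recover handling above: a panic is converted into a
// normal error on the named return value.
func runTask() (err error) {
	defer func() {
		if e := recover(); e != nil {
			err = fmt.Errorf("PANIC whilst trying to do task: %v", e)
		}
	}()
	panic("boom")
}

func main() {
	if err := runTask(); err != nil {
		log.Println(err) // PANIC whilst trying to do task: boom
	}
}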

View File

@@ -17,11 +17,24 @@ import (
type (
// FeishuPayload represents
FeishuPayload struct {
Title string `json:"title"`
Text string `json:"text"`
MsgType string `json:"msg_type"` // text / post / image / share_chat / interactive
Content struct {
Text string `json:"text"`
} `json:"content"`
}
)
func newFeishuTextPayload(text string) *FeishuPayload {
return &FeishuPayload{
MsgType: "text",
Content: struct {
Text string `json:"text"`
}{
Text: text,
},
}
}
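
With the constructor in place every Feishu notification is sent as a plain text message; per the MsgType comment above, the marshalled payload carries the msg_type discriminator plus a nested content object. A minimal sketch of the resulting JSON (the mirrored struct below is illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

// feishuText mirrors the shape of FeishuPayload for a text message.
type feishuText struct {
	MsgType string `json:"msg_type"`
	Content struct {
		Text string `json:"text"`
	} `json:"content"`
}

func main() {
	p := feishuText{MsgType: "text"}
	p.Content.Text = "[repo] branch master created"
	b, _ := json.Marshal(p)
	fmt.Println(string(b))
	// {"msg_type":"text","content":{"text":"[repo] branch master created"}}
}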
// SetSecret sets the Feishu secret
func (f *FeishuPayload) SetSecret(_ string) {}
@@ -42,34 +55,25 @@ var (
func (f *FeishuPayload) Create(p *api.CreatePayload) (api.Payloader, error) {
// created tag/branch
refName := git.RefEndName(p.Ref)
title := fmt.Sprintf("[%s] %s %s created", p.Repo.FullName, p.RefType, refName)
text := fmt.Sprintf("[%s] %s %s created", p.Repo.FullName, p.RefType, refName)
return &FeishuPayload{
Text: title,
Title: title,
}, nil
return newFeishuTextPayload(text), nil
}
// Delete implements PayloadConvertor Delete method
func (f *FeishuPayload) Delete(p *api.DeletePayload) (api.Payloader, error) {
// created tag/branch
refName := git.RefEndName(p.Ref)
title := fmt.Sprintf("[%s] %s %s deleted", p.Repo.FullName, p.RefType, refName)
text := fmt.Sprintf("[%s] %s %s deleted", p.Repo.FullName, p.RefType, refName)
return &FeishuPayload{
Text: title,
Title: title,
}, nil
return newFeishuTextPayload(text), nil
}
// Fork implements PayloadConvertor Fork method
func (f *FeishuPayload) Fork(p *api.ForkPayload) (api.Payloader, error) {
title := fmt.Sprintf("%s is forked to %s", p.Forkee.FullName, p.Repo.FullName)
text := fmt.Sprintf("%s is forked to %s", p.Forkee.FullName, p.Repo.FullName)
return &FeishuPayload{
Text: title,
Title: title,
}, nil
return newFeishuTextPayload(text), nil
}
// Push implements PayloadConvertor Push method
@@ -79,9 +83,7 @@ func (f *FeishuPayload) Push(p *api.PushPayload) (api.Payloader, error) {
commitDesc string
)
title := fmt.Sprintf("[%s:%s] %s", p.Repo.FullName, branchName, commitDesc)
var text string
var text = fmt.Sprintf("[%s:%s] %s\n", p.Repo.FullName, branchName, commitDesc)
// for each commit, generate attachment text
for i, commit := range p.Commits {
var authorName string
@@ -96,40 +98,28 @@ func (f *FeishuPayload) Push(p *api.PushPayload) (api.Payloader, error) {
}
}
return &FeishuPayload{
Text: text,
Title: title,
}, nil
return newFeishuTextPayload(text), nil
}
// Issue implements PayloadConvertor Issue method
func (f *FeishuPayload) Issue(p *api.IssuePayload) (api.Payloader, error) {
text, issueTitle, attachmentText, _ := getIssuesPayloadInfo(p, noneLinkFormatter, true)
return &FeishuPayload{
Text: text + "\r\n\r\n" + attachmentText,
Title: issueTitle,
}, nil
return newFeishuTextPayload(issueTitle + "\r\n" + text + "\r\n\r\n" + attachmentText), nil
}
// IssueComment implements PayloadConvertor IssueComment method
func (f *FeishuPayload) IssueComment(p *api.IssueCommentPayload) (api.Payloader, error) {
text, issueTitle, _ := getIssueCommentPayloadInfo(p, noneLinkFormatter, true)
return &FeishuPayload{
Text: text + "\r\n\r\n" + p.Comment.Body,
Title: issueTitle,
}, nil
return newFeishuTextPayload(issueTitle + "\r\n" + text + "\r\n\r\n" + p.Comment.Body), nil
}
// PullRequest implements PayloadConvertor PullRequest method
func (f *FeishuPayload) PullRequest(p *api.PullRequestPayload) (api.Payloader, error) {
text, issueTitle, attachmentText, _ := getPullRequestPayloadInfo(p, noneLinkFormatter, true)
return &FeishuPayload{
Text: text + "\r\n\r\n" + attachmentText,
Title: issueTitle,
}, nil
return newFeishuTextPayload(issueTitle + "\r\n" + text + "\r\n\r\n" + attachmentText), nil
}
// Review implements PayloadConvertor Review method
@@ -147,28 +137,19 @@ func (f *FeishuPayload) Review(p *api.PullRequestPayload, event models.HookEvent
}
return &FeishuPayload{
Text: title + "\r\n\r\n" + text,
Title: title,
}, nil
return newFeishuTextPayload(title + "\r\n\r\n" + text), nil
}
// Repository implements PayloadConvertor Repository method
func (f *FeishuPayload) Repository(p *api.RepositoryPayload) (api.Payloader, error) {
var title string
var text string
switch p.Action {
case api.HookRepoCreated:
title = fmt.Sprintf("[%s] Repository created", p.Repository.FullName)
return &FeishuPayload{
Text: title,
Title: title,
}, nil
text = fmt.Sprintf("[%s] Repository created", p.Repository.FullName)
return newFeishuTextPayload(text), nil
case api.HookRepoDeleted:
title = fmt.Sprintf("[%s] Repository deleted", p.Repository.FullName)
return &FeishuPayload{
Title: title,
Text: title,
}, nil
text = fmt.Sprintf("[%s] Repository deleted", p.Repository.FullName)
return newFeishuTextPayload(text), nil
}
return nil, nil
@@ -178,10 +159,7 @@ func (f *FeishuPayload) Repository(p *api.RepositoryPayload) (api.Payloader, err
func (f *FeishuPayload) Release(p *api.ReleasePayload) (api.Payloader, error) {
text, _ := getReleasePayloadInfo(p, noneLinkFormatter, true)
return &FeishuPayload{
Text: text,
Title: text,
}, nil
return newFeishuTextPayload(text), nil
}
// GetFeishuPayload converts a Feishu webhook into a FeishuPayload

View File

@@ -511,6 +511,7 @@ add_new_gpg_key=GPG-Schlüssel hinzufügen
key_content_ssh_placeholder=Beginnt mit 'ssh-ed25519', 'ssh-rsa', 'ecdsa-sha2-nistp256', 'ecdsa-sha2-nistp384' oder 'ecdsa-sha2-nistp521'
key_content_gpg_placeholder=Beginnt mit '-----BEGIN PGP PUBLIC KEY BLOCK-----'
ssh_key_been_used=Dieser SSH-Key wird auf diesem Server bereits verwendet.
ssh_key_name_used=Ein gleichnamiger SSH-Key existiert bereits in deinem Account.
gpg_key_id_used=Ein öffentlicher GPG-Schlüssel mit der gleichen ID existiert bereits.
gpg_no_key_email_found=Dieser GPG-Schlüssel kann mit keiner E-Mail-Adresse deines Kontos verwendet werden.
subkeys=Unterschlüssel
@@ -750,6 +751,7 @@ migrate.migrating_failed=Migrieren von <b>%s</b> fehlgeschlagen.
migrate.github.description=Migriere Daten von Github.com oder Github Enterprise.
migrate.git.description=Migriere oder spiegele git-Daten von Git-Services
migrate.gitlab.description=Migriere Daten von GitLab.com oder einem selbst gehostetem gitlab Server.
migrate.gitea.description=Migriere Daten von Gitea.com oder einem selbst gehostetem Gitea Server.
mirror_from=Mirror von
forked_from=geforkt von
@@ -1219,6 +1221,8 @@ pulls.required_status_check_administrator=Als Administrator kannst du diesen Pul
pulls.blocked_by_approvals=Dieser Pull-Request hat noch nicht genügend Zustimmungen. %d von %d Zustimmungen erteilt.
pulls.blocked_by_rejection=Dieser Pull-Request hat Änderungen, die von einem offiziellen Reviewer angefragt wurden.
pulls.blocked_by_outdated_branch=Dieser Pull Request ist blockiert, da er veraltet ist.
pulls.blocked_by_changed_protected_files_1=Diese Pull Request ist blockiert, weil er eine geschützte Datei ändert:
pulls.blocked_by_changed_protected_files_n=Diese Pull Request ist blockiert, weil er geschützte Dateien ändert:
pulls.can_auto_merge_desc=Dieser Pull-Request kann automatisch zusammengeführt werden.
pulls.cannot_auto_merge_desc=Dieser Pull-Request kann nicht automatisch zusammengeführt werden, da es Konflikte gibt.
pulls.cannot_auto_merge_helper=Bitte manuell zusammenführen, um die Konflikte zu lösen.
@@ -1766,6 +1770,7 @@ diff.review.comment=Kommentieren
diff.review.approve=Genehmigen
diff.review.reject=Änderung anfragen
diff.committed_by=committed von
diff.protected=Geschützt
releases.desc=Behalte den Überblick über Versionen und Downloads.
release.releases=Releases
@@ -1991,6 +1996,7 @@ dashboard.update_migration_poster_id=Migration Poster-IDs updaten
dashboard.git_gc_repos=Garbage-Collection auf Repositories ausführen
dashboard.resync_all_sshkeys=Die Datei '.ssh/authorized_keys' mit Gitea SSH-Schlüsseln aktualisieren.
dashboard.resync_all_sshkeys.desc=(Nicht benötigt für den eingebauten SSH-Server.)
dashboard.resync_all_sshprincipals.desc=(Nicht benötigt für den eingebauten SSH-Server.)
dashboard.resync_all_hooks=Synchronisiere „pre-receive“-, „update“- und „post-receive“-Hooks für alle Repositories erneut.
dashboard.reinit_missing_repos=Alle Git-Repositories mit Einträgen neu einlesen
dashboard.sync_external_users=Externe Benutzerdaten synchronisieren

View File

@@ -366,6 +366,7 @@ org_name_been_taken = The organization name is already taken.
team_name_been_taken = The team name is already taken.
team_no_units_error = Allow access to at least one repository section.
email_been_used = The email address is already used.
email_invalid = The email address is invalid.
openid_been_used = The OpenID address '%s' is already used.
username_password_incorrect = Username or password is incorrect.
password_complexity = Password does not pass complexity requirements:
@@ -870,9 +871,11 @@ editor.file_already_exists = A file named '%s' already exists in this repository
editor.commit_empty_file_header = Commit an empty file
editor.commit_empty_file_text = The file you're about to commit is empty. Proceed?
editor.no_changes_to_show = There are no changes to show.
editor.fail_to_update_file = Failed to update/create file '%s' with error: %v
editor.fail_to_update_file = Failed to update/create file '%s'.
editor.fail_to_update_file_summary = Error Message:
editor.push_rejected_no_message = The change was rejected by the server without a message. Please check githooks.
editor.push_rejected = The change was rejected by the server with the following message:<br>%s<br> Please check githooks.
editor.push_rejected = The change was rejected by the server. Please check githooks.
editor.push_rejected_summary = Full Rejection Message:
editor.add_subdir = Add a directory…
editor.unable_to_upload_files = Failed to upload files to '%s' with error: %v
editor.upload_file_is_locked = File '%s' is locked by %s.
@@ -1190,6 +1193,7 @@ issues.review.remove_review_request_self = "refused to review %s"
issues.review.pending = Pending
issues.review.review = Review
issues.review.reviewers = Reviewers
issues.review.outdated = Outdated
issues.review.show_outdated = Show outdated
issues.review.hide_outdated = Hide outdated
issues.review.show_resolved = Show resolved
@@ -1258,11 +1262,15 @@ pulls.rebase_merge_commit_pull_request = Rebase and Merge (--no-ff)
pulls.squash_merge_pull_request = Squash and Merge
pulls.require_signed_wont_sign = The branch requires signed commits but this merge will not be signed
pulls.invalid_merge_option = You cannot use this merge option for this pull request.
pulls.merge_conflict = Merge Failed: There was a conflict whilst merging: %[1]s<br>%[2]s<br>Hint: Try a different strategy
pulls.rebase_conflict = Merge Failed: There was a conflict whilst rebasing commit: %[1]s<br>%[2]s<br>%[3]s<br>Hint:Try a different strategy
pulls.merge_conflict = Merge Failed: There was a conflict whilst merging. Hint: Try a different strategy
pulls.merge_conflict_summary = Error Message
pulls.rebase_conflict = Merge Failed: There was a conflict whilst rebasing commit: %[1]s. Hint: Try a different strategy
pulls.rebase_conflict_summary = Error Message
; </summary><code>%[2]s<br>%[3]s</code></details>
pulls.unrelated_histories = Merge Failed: The merge head and base do not share a common history. Hint: Try a different strategy
pulls.merge_out_of_date = Merge Failed: Whilst generating the merge, the base was updated. Hint: Try again.
pulls.push_rejected = Merge Failed: The push was rejected with the following message:<br>%s<br>Review the githooks for this repository
pulls.push_rejected = Merge Failed: The push was rejected. Review the githooks for this repository.
pulls.push_rejected_summary = Full Rejection Message
pulls.push_rejected_no_message = Merge Failed: The push was rejected but there was no remote message.<br>Review the githooks for this repository
pulls.open_unmerged_pull_exists = `You cannot perform a reopen operation because there is a pending pull request (#%d) with identical properties.`
pulls.status_checking = Some checks are pending

View File

@@ -1037,8 +1037,7 @@ issues.close_comment_issue=Commenta e Chiudi
issues.reopen_issue=Riapri
issues.reopen_comment_issue=Commenta e Riapri
issues.create_comment=Commento
issues.closed_at="`chiuso questo probleam <a id=\"%[1]s\" href=\"#%[1]s\">%[2]s</a>`
Contextrequest"
issues.closed_at=`chiuso questo probleam <a id="%[1]s" href="#%[1]s">%[2]s</a>`
issues.reopened_at=`riaperto questo problema <a id="%[1]s" href="#%[1]s">%[2]s</a>`
issues.commit_ref_at=`ha fatto riferimento a questa issue dal commit <a id="%[1]s" href="#%[1]s">%[2]s</a>`
issues.ref_issue_from=`<a href="%[3]s">ha fatto riferimento a questo problema %[4]s</a> <a id="%[1]s" href="#%[1]s">%[2]s</a>`

View File

@@ -1501,6 +1501,7 @@ settings.trust_model.committer.long=Revīzijas iesūtītāja: Uzticēties paraks
settings.trust_model.committer.desc=Ticami paraksti tiks atzīmēti kā "uzticami", ja tie atbilst revīzijas iesūtītājam, citos gadījumos tie tiks atzīmēti kā "nesakrītoši". Šis nozīmē, ka Gitea būs kā revīzijas iesūtītājs parakstītām revīzijām, kur īstais revīzijas iesūtītājs tiks atīzmēts revīzijas komentāra beigās ar tekstu Co-Authored-By: un Co-Committed-By:. Noklusētajai Gitea atslēgai ir jāatbilst lietotājam datu bāzē.
settings.trust_model.collaboratorcommitter=Līdzstrādnieka un revīzijas iesūtītāja
settings.trust_model.collaboratorcommitter.long=Līdzstrādnieka un revīzijas iesūtītāja: Uzticēties līdzstrādnieku parakstiem, kas atbilst revīzijas iesūtītājam
settings.trust_model.collaboratorcommitter.desc=Ticami līdzstrādnieku paraksti tiks atzīmēti kā "uzticami", ja tie atbilst revīzijas iesūtītājam, citos gadījumos tie tiks atzīmēti kā "neuzticami", ja paraksts atbilst revīzijas iesūtītajam, vai "nesakrītoši", ja neatbilst. Šis nozīmē, ka Gitea būs kā revīzijas iesūtītājs parakstītām revīzijām, kur īstais revīzijas iesūtītājs tiks atīzmēts revīzijas komentāra beigās ar tekstu Co-Authored-By: un Co-Committed-By:. Noklusētajai Gitea atslēgai ir jāatbilst lietotājam datu bāzē.
settings.wiki_delete=Dzēst vikivietnes datus
settings.wiki_delete_desc=Vikivietnes repozitorija dzēšana ir <strong>NEATGRIEZENISKA</strong>. Vai turpināt?
settings.wiki_delete_notices_1=- Šī darbība dzēsīs un atspējos repozitorija %s vikivietni.

public/img/failed.png (new binary file, 11 KiB; image not shown)

View File

@@ -13,6 +13,7 @@ import (
"code.gitea.io/gitea/modules/auth"
"code.gitea.io/gitea/modules/auth/ldap"
"code.gitea.io/gitea/modules/auth/oauth2"
"code.gitea.io/gitea/modules/auth/pam"
"code.gitea.io/gitea/modules/base"
"code.gitea.io/gitea/modules/context"
"code.gitea.io/gitea/modules/log"
@@ -57,14 +58,20 @@ type dropdownItem struct {
}
var (
authSources = []dropdownItem{
{models.LoginNames[models.LoginLDAP], models.LoginLDAP},
{models.LoginNames[models.LoginDLDAP], models.LoginDLDAP},
{models.LoginNames[models.LoginSMTP], models.LoginSMTP},
{models.LoginNames[models.LoginPAM], models.LoginPAM},
{models.LoginNames[models.LoginOAuth2], models.LoginOAuth2},
{models.LoginNames[models.LoginSSPI], models.LoginSSPI},
}
authSources = func() []dropdownItem {
items := []dropdownItem{
{models.LoginNames[models.LoginLDAP], models.LoginLDAP},
{models.LoginNames[models.LoginDLDAP], models.LoginDLDAP},
{models.LoginNames[models.LoginSMTP], models.LoginSMTP},
{models.LoginNames[models.LoginOAuth2], models.LoginOAuth2},
{models.LoginNames[models.LoginSSPI], models.LoginSSPI},
}
if pam.Supported {
items = append(items, dropdownItem{models.LoginNames[models.LoginPAM], models.LoginPAM})
}
return items
}()
securityProtocols = []dropdownItem{
{models.SecurityProtocolNames[ldap.SecurityProtocolUnencrypted], ldap.SecurityProtocolUnencrypted},
{models.SecurityProtocolNames[ldap.SecurityProtocolLDAPS], ldap.SecurityProtocolLDAPS},

View File

@@ -5,6 +5,8 @@
package admin
import (
"net/url"
"strconv"
"strings"
"code.gitea.io/gitea/models"
@@ -71,6 +73,8 @@ func UnadoptedRepos(ctx *context.Context) {
opts.Page = 1
}
ctx.Data["CurrentPage"] = opts.Page
doSearch := ctx.QueryBool("search")
ctx.Data["search"] = doSearch
@@ -79,6 +83,7 @@ func UnadoptedRepos(ctx *context.Context) {
if !doSearch {
pager := context.NewPagination(0, opts.PageSize, opts.Page, 5)
pager.SetDefaultParams(ctx)
pager.AddParam(ctx, "search", "search")
ctx.Data["Page"] = pager
ctx.HTML(200, tplUnadoptedRepos)
return
@@ -92,6 +97,7 @@ func UnadoptedRepos(ctx *context.Context) {
ctx.Data["Dirs"] = repoNames
pager := context.NewPagination(int(count), opts.PageSize, opts.Page, 5)
pager.SetDefaultParams(ctx)
pager.AddParam(ctx, "search", "search")
ctx.Data["Page"] = pager
ctx.HTML(200, tplUnadoptedRepos)
}
@@ -100,6 +106,9 @@ func UnadoptedRepos(ctx *context.Context) {
func AdoptOrDeleteRepository(ctx *context.Context) {
dir := ctx.Query("id")
action := ctx.Query("action")
page := ctx.QueryInt("page")
q := ctx.Query("q")
dirSplit := strings.SplitN(dir, "/", 2)
if len(dirSplit) != 2 {
ctx.Redirect(setting.AppSubURL + "/admin/repos")
@@ -141,5 +150,5 @@ func AdoptOrDeleteRepository(ctx *context.Context) {
}
ctx.Flash.Success(ctx.Tr("repo.delete_preexisting_success", dir))
}
ctx.Redirect(setting.AppSubURL + "/admin/repos/unadopted")
ctx.Redirect(setting.AppSubURL + "/admin/repos/unadopted?search=true&q=" + url.QueryEscape(q) + "&page=" + strconv.Itoa(page))
}

View File

@@ -129,6 +129,9 @@ func NewUserPost(ctx *context.Context, form auth.AdminCreateUserForm) {
case models.IsErrEmailAlreadyUsed(err):
ctx.Data["Err_Email"] = true
ctx.RenderWithErr(ctx.Tr("form.email_been_used"), tplUserNew, &form)
case models.IsErrEmailInvalid(err):
ctx.Data["Err_Email"] = true
ctx.RenderWithErr(ctx.Tr("form.email_invalid"), tplUserNew, &form)
case models.IsErrNameReserved(err):
ctx.Data["Err_UserName"] = true
ctx.RenderWithErr(ctx.Tr("user.form.name_reserved", err.(models.ErrNameReserved).Name), tplUserNew, &form)
@@ -277,6 +280,9 @@ func EditUserPost(ctx *context.Context, form auth.AdminEditUserForm) {
if models.IsErrEmailAlreadyUsed(err) {
ctx.Data["Err_Email"] = true
ctx.RenderWithErr(ctx.Tr("form.email_been_used"), tplUserEdit, &form)
} else if models.IsErrEmailInvalid(err) {
ctx.Data["Err_Email"] = true
ctx.RenderWithErr(ctx.Tr("form.email_invalid"), tplUserEdit, &form)
} else {
ctx.ServerError("UpdateUser", err)
}

View File

@@ -87,3 +87,33 @@ func TestNewUserPost_MustChangePasswordFalse(t *testing.T) {
assert.Equal(t, email, u.Email)
assert.False(t, u.MustChangePassword)
}
func TestNewUserPost_InvalidEmail(t *testing.T) {
models.PrepareTestEnv(t)
ctx := test.MockContext(t, "admin/users/new")
u := models.AssertExistsAndLoadBean(t, &models.User{
IsAdmin: true,
ID: 2,
}).(*models.User)
ctx.User = u
username := "gitea"
email := "gitea@gitea.io\r\n"
form := auth.AdminCreateUserForm{
LoginType: "local",
LoginName: "local",
UserName: username,
Email: email,
Password: "abc123ABC!=$",
SendNotify: false,
MustChangePassword: false,
}
NewUserPost(ctx, form)
assert.NotEmpty(t, ctx.Flash.ErrorMsg)
}

View File

@@ -101,6 +101,7 @@ func CreateUser(ctx *context.APIContext, form api.CreateUserOption) {
models.IsErrEmailAlreadyUsed(err) ||
models.IsErrNameReserved(err) ||
models.IsErrNameCharsNotAllowed(err) ||
models.IsErrEmailInvalid(err) ||
models.IsErrNamePatternNotAllowed(err) {
ctx.Error(http.StatusUnprocessableEntity, "", err)
} else {
@@ -208,7 +209,7 @@ func EditUser(ctx *context.APIContext, form api.EditUserOption) {
}
if err := models.UpdateUser(u); err != nil {
if models.IsErrEmailAlreadyUsed(err) {
if models.IsErrEmailAlreadyUsed(err) || models.IsErrEmailInvalid(err) {
ctx.Error(http.StatusUnprocessableEntity, "", err)
} else {
ctx.Error(http.StatusInternalServerError, "UpdateUser", err)

View File

@@ -191,14 +191,14 @@ func reqToken() macaron.Handler {
ctx.RequireCSRF()
return
}
ctx.Context.Error(http.StatusUnauthorized)
ctx.Error(http.StatusUnauthorized, "reqToken", "token is required")
}
}
func reqBasicAuth() macaron.Handler {
return func(ctx *context.APIContext) {
if !ctx.Context.IsBasicAuth {
ctx.Context.Error(http.StatusUnauthorized)
ctx.Error(http.StatusUnauthorized, "reqBasicAuth", "basic auth required")
return
}
ctx.CheckForOTP()
@@ -207,9 +207,9 @@ func reqBasicAuth() macaron.Handler {
// reqSiteAdmin user should be the site admin
func reqSiteAdmin() macaron.Handler {
return func(ctx *context.Context) {
return func(ctx *context.APIContext) {
if !ctx.IsUserSiteAdmin() {
ctx.Error(http.StatusForbidden)
ctx.Error(http.StatusForbidden, "reqSiteAdmin", "user should be the site admin")
return
}
}
@@ -217,9 +217,9 @@ func reqSiteAdmin() macaron.Handler {
// reqOwner user should be the owner of the repo or site admin.
func reqOwner() macaron.Handler {
return func(ctx *context.Context) {
return func(ctx *context.APIContext) {
if !ctx.IsUserRepoOwner() && !ctx.IsUserSiteAdmin() {
ctx.Error(http.StatusForbidden)
ctx.Error(http.StatusForbidden, "reqOwner", "user should be the owner of the repo")
return
}
}
@@ -227,9 +227,9 @@ func reqOwner() macaron.Handler {
// reqAdmin user should be an owner or a collaborator with admin write of a repository, or site admin
func reqAdmin() macaron.Handler {
return func(ctx *context.Context) {
return func(ctx *context.APIContext) {
if !ctx.IsUserRepoAdmin() && !ctx.IsUserSiteAdmin() {
ctx.Error(http.StatusForbidden)
ctx.Error(http.StatusForbidden, "reqAdmin", "user should be an owner or a collaborator with admin write of a repository")
return
}
}
@@ -237,9 +237,9 @@ func reqAdmin() macaron.Handler {
// reqRepoWriter user should have a permission to write to a repo, or be a site admin
func reqRepoWriter(unitTypes ...models.UnitType) macaron.Handler {
return func(ctx *context.Context) {
return func(ctx *context.APIContext) {
if !ctx.IsUserRepoWriter(unitTypes) && !ctx.IsUserRepoAdmin() && !ctx.IsUserSiteAdmin() {
ctx.Error(http.StatusForbidden)
ctx.Error(http.StatusForbidden, "reqRepoWriter", "user should have a permission to write to a repo")
return
}
}
@@ -247,9 +247,9 @@ func reqRepoWriter(unitTypes ...models.UnitType) macaron.Handler {
// reqRepoReader user should have specific read permission or be a repo admin or a site admin
func reqRepoReader(unitType models.UnitType) macaron.Handler {
return func(ctx *context.Context) {
return func(ctx *context.APIContext) {
if !ctx.IsUserRepoReaderSpecific(unitType) && !ctx.IsUserRepoAdmin() && !ctx.IsUserSiteAdmin() {
ctx.Error(http.StatusForbidden)
ctx.Error(http.StatusForbidden, "reqRepoReader", "user should have specific read permission or be a repo admin or a site admin")
return
}
}
@@ -257,9 +257,9 @@ func reqRepoReader(unitType models.UnitType) macaron.Handler {
// reqAnyRepoReader user should have any permission to read repository or permissions of site admin
func reqAnyRepoReader() macaron.Handler {
return func(ctx *context.Context) {
return func(ctx *context.APIContext) {
if !ctx.IsUserRepoReaderAny() && !ctx.IsUserSiteAdmin() {
ctx.Error(http.StatusForbidden)
ctx.Error(http.StatusForbidden, "reqAnyRepoReader", "user should have any permission to read repository or permissions of site admin")
return
}
}
@@ -502,7 +502,6 @@ func mustNotBeArchived(ctx *context.APIContext) {
}
// RegisterRoutes registers all v1 APIs routes to web application.
// FIXME: custom form error response
func RegisterRoutes(m *macaron.Macaron) {
bind := binding.Bind
@@ -641,7 +640,7 @@ func RegisterRoutes(m *macaron.Macaron) {
m.Group("/:username/:reponame", func() {
m.Combo("").Get(reqAnyRepoReader(), repo.Get).
Delete(reqToken(), reqOwner(), repo.Delete).
Patch(reqToken(), reqAdmin(), bind(api.EditRepoOption{}), context.RepoRef(), repo.Edit)
Patch(reqToken(), reqAdmin(), bind(api.EditRepoOption{}), context.RepoRefForAPI(), repo.Edit)
m.Post("/transfer", reqOwner(), bind(api.TransferRepoOption{}), repo.Transfer)
m.Combo("/notifications").
Get(reqToken(), notify.ListRepoNotifications).
@@ -653,7 +652,7 @@ func RegisterRoutes(m *macaron.Macaron) {
m.Combo("").Get(repo.GetHook).
Patch(bind(api.EditHookOption{}), repo.EditHook).
Delete(repo.DeleteHook)
m.Post("/tests", context.RepoRef(), repo.TestHook)
m.Post("/tests", context.RepoRefForAPI(), repo.TestHook)
})
m.Group("/git", func() {
m.Combo("").Get(repo.ListGitHooks)
@@ -670,14 +669,14 @@ func RegisterRoutes(m *macaron.Macaron) {
Put(reqAdmin(), bind(api.AddCollaboratorOption{}), repo.AddCollaborator).
Delete(reqAdmin(), repo.DeleteCollaborator)
}, reqToken())
m.Get("/raw/*", context.RepoRefByType(context.RepoRefAny), reqRepoReader(models.UnitTypeCode), repo.GetRawFile)
m.Get("/raw/*", context.RepoRefForAPI(), reqRepoReader(models.UnitTypeCode), repo.GetRawFile)
m.Get("/archive/*", reqRepoReader(models.UnitTypeCode), repo.GetArchive)
m.Combo("/forks").Get(repo.ListForks).
Post(reqToken(), reqRepoReader(models.UnitTypeCode), bind(api.CreateForkOption{}), repo.CreateFork)
m.Group("/branches", func() {
m.Get("", repo.ListBranches)
m.Get("/*", context.RepoRefByType(context.RepoRefBranch), repo.GetBranch)
m.Delete("/*", reqRepoWriter(models.UnitTypeCode), context.RepoRefByType(context.RepoRefBranch), repo.DeleteBranch)
m.Get("/*", repo.GetBranch)
m.Delete("/*", context.ReferencesGitRepo(false), reqRepoWriter(models.UnitTypeCode), repo.DeleteBranch)
m.Post("", reqRepoWriter(models.UnitTypeCode), bind(api.CreateBranchRepoOption{}), repo.CreateBranch)
}, reqRepoReader(models.UnitTypeCode))
m.Group("/branch_protections", func() {
@@ -802,7 +801,7 @@ func RegisterRoutes(m *macaron.Macaron) {
})
}, reqRepoReader(models.UnitTypeReleases))
m.Post("/mirror-sync", reqToken(), reqRepoWriter(models.UnitTypeCode), repo.MirrorSync)
m.Get("/editorconfig/:filename", context.RepoRef(), reqRepoReader(models.UnitTypeCode), repo.GetEditorconfig)
m.Get("/editorconfig/:filename", context.RepoRefForAPI(), reqRepoReader(models.UnitTypeCode), repo.GetEditorconfig)
m.Group("/pulls", func() {
m.Combo("").Get(bind(api.ListPullRequestsOptions{}), repo.ListPullRequests).
Post(reqToken(), mustNotBeArchived, bind(api.CreatePullRequestOption{}), repo.CreatePullRequest)
@@ -847,9 +846,9 @@ func RegisterRoutes(m *macaron.Macaron) {
})
m.Get("/refs", repo.GetGitAllRefs)
m.Get("/refs/*", repo.GetGitRefs)
m.Get("/trees/:sha", context.RepoRef(), repo.GetTree)
m.Get("/blobs/:sha", context.RepoRef(), repo.GetBlob)
m.Get("/tags/:sha", context.RepoRef(), repo.GetTag)
m.Get("/trees/:sha", context.RepoRefForAPI(), repo.GetTree)
m.Get("/blobs/:sha", context.RepoRefForAPI(), repo.GetBlob)
m.Get("/tags/:sha", context.RepoRefForAPI(), repo.GetTag)
}, reqRepoReader(models.UnitTypeCode))
m.Group("/contents", func() {
m.Get("", repo.GetContentsList)

View File

@@ -101,7 +101,7 @@ func ListRepoNotifications(ctx *context.APIContext) {
before, since, err := utils.GetQueryBeforeSince(ctx)
if err != nil {
ctx.InternalServerError(err)
ctx.Error(http.StatusUnprocessableEntity, "GetQueryBeforeSince", err)
return
}
opts := models.FindNotificationOptions{

View File

@@ -63,7 +63,7 @@ func ListNotifications(ctx *context.APIContext) {
before, since, err := utils.GetQueryBeforeSince(ctx)
if err != nil {
ctx.InternalServerError(err)
ctx.Error(http.StatusUnprocessableEntity, "GetQueryBeforeSince", err)
return
}
opts := models.FindNotificationOptions{

View File

@@ -17,19 +17,28 @@ import (
"code.gitea.io/gitea/routers/api/v1/utils"
)
func listUserOrgs(ctx *context.APIContext, u *models.User, all bool) {
if err := u.GetOrganizations(&models.SearchOrganizationsOptions{
ListOptions: utils.GetListOptions(ctx),
All: all,
}); err != nil {
ctx.Error(http.StatusInternalServerError, "GetOrganizations", err)
func listUserOrgs(ctx *context.APIContext, u *models.User) {
listOptions := utils.GetListOptions(ctx)
showPrivate := ctx.IsSigned && (ctx.User.IsAdmin || ctx.User.ID == u.ID)
orgs, err := models.GetOrgsByUserID(u.ID, showPrivate)
if err != nil {
ctx.Error(http.StatusInternalServerError, "GetOrgsByUserID", err)
return
}
maxResults := len(orgs)
apiOrgs := make([]*api.Organization, len(u.Orgs))
for i := range u.Orgs {
apiOrgs[i] = convert.ToOrganization(u.Orgs[i])
orgs = utils.PaginateUserSlice(orgs, listOptions.Page, listOptions.PageSize)
apiOrgs := make([]*api.Organization, len(orgs))
for i := range orgs {
apiOrgs[i] = convert.ToOrganization(orgs[i])
}
ctx.SetLinkHeader(int(maxResults), listOptions.PageSize)
ctx.Header().Set("X-Total-Count", fmt.Sprintf("%d", maxResults))
ctx.Header().Set("Access-Control-Expose-Headers", "X-Total-Count, Link")
ctx.JSON(http.StatusOK, &apiOrgs)
}
@@ -53,7 +62,7 @@ func ListMyOrgs(ctx *context.APIContext) {
// "200":
// "$ref": "#/responses/OrganizationList"
listUserOrgs(ctx, ctx.User, true)
listUserOrgs(ctx, ctx.User)
}
// ListUserOrgs list user's orgs
@@ -85,7 +94,7 @@ func ListUserOrgs(ctx *context.APIContext) {
if ctx.Written() {
return
}
listUserOrgs(ctx, u, ctx.User != nil && (ctx.User.IsAdmin || ctx.User.ID == u.ID))
listUserOrgs(ctx, u)
}
// GetAll return list of all public organizations

View File

@@ -46,15 +46,12 @@ func GetBranch(ctx *context.APIContext) {
// responses:
// "200":
// "$ref": "#/responses/Branch"
// "404":
// "$ref": "#/responses/notFound"
if ctx.Repo.TreePath != "" {
// if TreePath != "", then URL contained extra slashes
// (i.e. "master/subbranch" instead of "master"), so branch does
// not exist
ctx.NotFound()
return
}
branch, err := repo_module.GetBranch(ctx.Repo.Repository, ctx.Repo.BranchName)
branchName := ctx.Params("*")
branch, err := repo_module.GetBranch(ctx.Repo.Repository, branchName)
if err != nil {
if git.IsErrBranchNotExist(err) {
ctx.NotFound(err)
@@ -70,7 +67,7 @@ func GetBranch(ctx *context.APIContext) {
return
}
- branchProtection, err := ctx.Repo.Repository.GetBranchProtection(ctx.Repo.BranchName)
+ branchProtection, err := ctx.Repo.Repository.GetBranchProtection(branchName)
if err != nil {
ctx.Error(http.StatusInternalServerError, "GetBranchProtection", err)
return
@@ -113,21 +110,17 @@ func DeleteBranch(ctx *context.APIContext) {
// "$ref": "#/responses/empty"
// "403":
// "$ref": "#/responses/error"
// "404":
// "$ref": "#/responses/notFound"
- if ctx.Repo.TreePath != "" {
- // if TreePath != "", then URL contained extra slashes
- // (i.e. "master/subbranch" instead of "master"), so branch does
- // not exist
- ctx.NotFound()
- return
- }
+ branchName := ctx.Params("*")
- if ctx.Repo.Repository.DefaultBranch == ctx.Repo.BranchName {
+ if ctx.Repo.Repository.DefaultBranch == branchName {
ctx.Error(http.StatusForbidden, "DefaultBranch", fmt.Errorf("can not delete default branch"))
return
}
- isProtected, err := ctx.Repo.Repository.IsProtectedBranch(ctx.Repo.BranchName, ctx.User)
+ isProtected, err := ctx.Repo.Repository.IsProtectedBranch(branchName, ctx.User)
if err != nil {
ctx.InternalServerError(err)
return
@@ -137,7 +130,7 @@ func DeleteBranch(ctx *context.APIContext) {
return
}
- branch, err := repo_module.GetBranch(ctx.Repo.Repository, ctx.Repo.BranchName)
+ branch, err := repo_module.GetBranch(ctx.Repo.Repository, branchName)
if err != nil {
if git.IsErrBranchNotExist(err) {
ctx.NotFound(err)
@@ -153,7 +146,7 @@ func DeleteBranch(ctx *context.APIContext) {
return
}
- if err := ctx.Repo.GitRepo.DeleteBranch(ctx.Repo.BranchName, git.DeleteBranchOptions{
+ if err := ctx.Repo.GitRepo.DeleteBranch(branchName, git.DeleteBranchOptions{
Force: true,
}); err != nil {
ctx.Error(http.StatusInternalServerError, "DeleteBranch", err)
@@ -163,7 +156,7 @@ func DeleteBranch(ctx *context.APIContext) {
// Don't return error below this
if err := repo_service.PushUpdate(
&repo_service.PushUpdateOptions{
- RefFullName: git.BranchPrefix + ctx.Repo.BranchName,
+ RefFullName: git.BranchPrefix + branchName,
OldCommitID: c.ID.String(),
NewCommitID: git.EmptySHA,
PusherID: ctx.User.ID,
@@ -174,7 +167,7 @@ func DeleteBranch(ctx *context.APIContext) {
log.Error("Update: %v", err)
}
- if err := ctx.Repo.Repository.AddDeletedBranch(ctx.Repo.BranchName, c.ID.String(), ctx.User.ID); err != nil {
+ if err := ctx.Repo.Repository.AddDeletedBranch(branchName, c.ID.String(), ctx.User.ID); err != nil {
log.Warn("AddDeletedBranch: %v", err)
}

View File

@@ -56,7 +56,7 @@ func ListIssueComments(ctx *context.APIContext) {
before, since, err := utils.GetQueryBeforeSince(ctx)
if err != nil {
- ctx.Error(http.StatusInternalServerError, "GetQueryBeforeSince", err)
+ ctx.Error(http.StatusUnprocessableEntity, "GetQueryBeforeSince", err)
return
}
issue, err := models.GetIssueByIndex(ctx.Repo.Repository.ID, ctx.ParamsInt64(":index"))
@@ -132,7 +132,7 @@ func ListRepoIssueComments(ctx *context.APIContext) {
before, since, err := utils.GetQueryBeforeSince(ctx)
if err != nil {
- ctx.Error(http.StatusInternalServerError, "GetQueryBeforeSince", err)
+ ctx.Error(http.StatusUnprocessableEntity, "GetQueryBeforeSince", err)
return
}

View File

@@ -56,7 +56,11 @@ func GetIssueCommentReactions(ctx *context.APIContext) {
return
}
- if !ctx.Repo.CanRead(models.UnitTypeIssues) {
+ if err := comment.LoadIssue(); err != nil {
+ ctx.Error(http.StatusInternalServerError, "comment.LoadIssue", err)
+ }
+ if !ctx.Repo.CanReadIssuesOrPulls(comment.Issue.IsPull) {
ctx.Error(http.StatusForbidden, "GetIssueCommentReactions", errors.New("no permission to get reactions"))
return
}
@@ -270,7 +274,7 @@ func GetIssueReactions(ctx *context.APIContext) {
return
}
- if !ctx.Repo.CanRead(models.UnitTypeIssues) {
+ if !ctx.Repo.CanReadIssuesOrPulls(issue.IsPull) {
ctx.Error(http.StatusForbidden, "GetIssueReactions", errors.New("no permission to get reactions"))
return
}
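The reaction endpoints now check read access for the unit the item actually belongs to: reactions on a pull-request comment only require pull access, not issue access. A minimal sketch of that distinction under an assumed permission model with separate issue and pull units:

package main

import "fmt"

// permissions is a stand-in for a per-repository permission set with
// separate read rights for issues and pull requests.
type permissions struct {
	canReadIssues bool
	canReadPulls  bool
}

// canReadIssuesOrPulls picks the relevant unit depending on whether the
// item being accessed belongs to a pull request.
func (p permissions) canReadIssuesOrPulls(isPull bool) bool {
	if isPull {
		return p.canReadPulls
	}
	return p.canReadIssues
}

func main() {
	p := permissions{canReadIssues: false, canReadPulls: true}
	fmt.Println(p.canReadIssuesOrPulls(true))  // true: PR reactions are visible
	fmt.Println(p.canReadIssuesOrPulls(false)) // false: issue reactions stay hidden
}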

View File

@@ -86,7 +86,7 @@ func ListTrackedTimes(ctx *context.APIContext) {
}
if opts.CreatedBeforeUnix, opts.CreatedAfterUnix, err = utils.GetQueryBeforeSince(ctx); err != nil {
- ctx.InternalServerError(err)
+ ctx.Error(http.StatusUnprocessableEntity, "GetQueryBeforeSince", err)
return
}
@@ -491,7 +491,7 @@ func ListTrackedTimesByRepository(ctx *context.APIContext) {
var err error
if opts.CreatedBeforeUnix, opts.CreatedAfterUnix, err = utils.GetQueryBeforeSince(ctx); err != nil {
- ctx.InternalServerError(err)
+ ctx.Error(http.StatusUnprocessableEntity, "GetQueryBeforeSince", err)
return
}
@@ -554,7 +554,7 @@ func ListMyTrackedTimes(ctx *context.APIContext) {
var err error
if opts.CreatedBeforeUnix, opts.CreatedAfterUnix, err = utils.GetQueryBeforeSince(ctx); err != nil {
- ctx.InternalServerError(err)
+ ctx.Error(http.StatusUnprocessableEntity, "GetQueryBeforeSince", err)
return
}

View File

@@ -212,6 +212,8 @@ func handleMigrateError(ctx *context.APIContext, repoOwner *models.User, remoteA
ctx.Error(http.StatusUnprocessableEntity, "", fmt.Sprintf("The username '%s' contains invalid characters.", err.(models.ErrNameCharsNotAllowed).Name))
case models.IsErrNamePatternNotAllowed(err):
ctx.Error(http.StatusUnprocessableEntity, "", fmt.Sprintf("The pattern '%s' is not allowed in a username.", err.(models.ErrNamePatternNotAllowed).Pattern))
+ case models.IsErrMigrationNotAllowed(err):
+ ctx.Error(http.StatusUnprocessableEntity, "", err)
default:
err = util.URLSanitizedError(err, remoteAddr)
if strings.Contains(err.Error(), "Authentication failed") ||
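The migration error handler gains an explicit case so a disallowed migration target surfaces as a 422 rather than falling through to the generic branch. A minimal sketch of that error-to-status mapping, using hypothetical sentinel errors in place of the models error checkers:

package main

import (
	"errors"
	"fmt"
	"net/http"
)

// Hypothetical domain errors standing in for models.IsErrMigrationNotAllowed and friends.
var (
	errMigrationNotAllowed = errors.New("migration from this host is not allowed")
	errNameReserved        = errors.New("the name is reserved")
)

// statusFor maps known domain errors to client-facing status codes;
// anything unrecognised remains an internal error.
func statusFor(err error) int {
	switch {
	case errors.Is(err, errMigrationNotAllowed), errors.Is(err, errNameReserved):
		return http.StatusUnprocessableEntity
	default:
		return http.StatusInternalServerError
	}
}

func main() {
	fmt.Println(statusFor(errMigrationNotAllowed)) // 422
	fmt.Println(statusFor(errors.New("boom")))     // 500
}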

View File

@@ -284,6 +284,12 @@ func CreatePullRequest(ctx *context.APIContext, form api.CreatePullRequestOption
// "422":
// "$ref": "#/responses/validationError"
+ if form.Head == form.Base {
+ ctx.Error(http.StatusUnprocessableEntity, "BaseHeadSame",
+ "Invalid PullRequest: There are no changes between the head and the base")
+ return
+ }
var (
repo = ctx.Repo.Repository
labelIDs []int64

View File

@@ -244,7 +244,7 @@ type combinedCommitStatus struct {
// GetCombinedCommitStatusByRef returns the combined status for any given commit hash
func GetCombinedCommitStatusByRef(ctx *context.APIContext) {
- // swagger:operation GET /repos/{owner}/{repo}/commits/{ref}/statuses repository repoGetCombinedStatusByRef
+ // swagger:operation GET /repos/{owner}/{repo}/commits/{ref}/status repository repoGetCombinedStatusByRef
// ---
// summary: Get a commit's combined status, by branch/tag/commit reference
// produces:
@@ -272,7 +272,7 @@ func GetCombinedCommitStatusByRef(ctx *context.APIContext) {
// required: false
// responses:
// "200":
// "$ref": "#/responses/Status"
// "$ref": "#/responses/CombinedStatus"
// "400":
// "$ref": "#/responses/error"
@@ -292,7 +292,7 @@ func GetCombinedCommitStatusByRef(ctx *context.APIContext) {
}
if len(statuses) == 0 {
- ctx.Status(http.StatusOK)
+ ctx.JSON(http.StatusOK, &api.CombinedStatus{})
return
}
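Responding with ctx.JSON(http.StatusOK, &api.CombinedStatus{}) instead of a bare ctx.Status(http.StatusOK) means the client always receives a JSON body, so decoders no longer fail on an empty response when a commit has no statuses. A minimal standard-library sketch of that behaviour; the combinedStatus shape here is an assumption, not Gitea's full api.CombinedStatus type:

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// combinedStatus is a stripped-down stand-in for api.CombinedStatus.
type combinedStatus struct {
	State      string   `json:"state"`
	TotalCount int      `json:"total_count"`
	Statuses   []string `json:"statuses"`
}

func combinedStatusHandler(w http.ResponseWriter, r *http.Request) {
	statuses := loadStatuses(r.URL.Query().Get("ref")) // may be empty
	w.Header().Set("Content-Type", "application/json")
	resp := combinedStatus{Statuses: []string{}}
	if len(statuses) > 0 {
		resp = combinedStatus{State: "success", TotalCount: len(statuses), Statuses: statuses}
	}
	// Always encode a JSON object, even when there is nothing to report.
	json.NewEncoder(w).Encode(resp)
}

// loadStatuses is a placeholder for the real status lookup.
func loadStatuses(ref string) []string { return nil }

func main() {
	http.HandleFunc("/status", combinedStatusHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}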

View File

@@ -309,3 +309,10 @@ type swaggerLanguageStatistics struct {
// in: body
Body map[string]int64 `json:"body"`
}
+ // CombinedStatus
+ // swagger:response CombinedStatus
+ type swaggerCombinedStatus struct {
+ // in: body
+ Body api.CombinedStatus `json:"body"`
+ }

Some files were not shown because too many files have changed in this diff.